Create an AKS cluster.

```
az aks create -g <RESOURCE_GROUP_NAME> -n <NAME> -s <AGENT_SIZE> -c <AGENT_COUNT> -l <LOCATION> --generate-ssh-keys
```
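For example, with illustrative values (the resource group name, node size, node count, and region below are arbitrary choices, not recommendations):

```
# Hypothetical values; substitute your own resource group, size, count, and region.
az aks create -g kubeflow-rg -n kubeflow-aks -s Standard_D4s_v3 -c 3 -l eastus --generate-ssh-keys
```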
# KubeFlow installation
Create user credentials. You only need to run this command once.
```
az aks get-credentials -n <NAME> -g <RESOURCE_GROUP_NAME>
```
Download the kfctl v1.2.0 release from the [Kubeflow releases page](https://github.com/kubeflow/kfctl/releases/tag/v1.2.0).

Unpack the tarball.

```
tar -xvf kfctl_v1.2.0_<platform>.tar.gz
```
Run the following commands, in order, to set up and deploy Kubeflow. The code below includes an optional command to add the kfctl binary to your path. If you don't add the binary to your path, you must use the full path to the kfctl binary each time you run it.
```
export PATH=$PATH:"<path-to-kfctl>"
export KF_NAME=<your choice of name for the Kubeflow deployment>
```
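As a minimal sketch of the deployment itself, assuming the generic `kfctl_k8s_istio` KfDef from the upstream Kubeflow v1.2 manifests branch (your deployment may use a different config URI, and the directory layout here is an assumption):

```
# Assumption: the generic kfctl_k8s_istio KfDef; swap in the config your deployment uses.
export BASE_DIR=<path to a base directory>
export KF_DIR=${BASE_DIR}/${KF_NAME}
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml"

# Build and apply the Kubeflow resources into the cluster.
mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}
```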
Run this command to check that the resources have been deployed correctly in the `kubeflow` namespace:

```
kubectl get all -n kubeflow
```
Open the KubeFlow Dashboard. The default installation does not create an external endpoint, but you can use port-forwarding to visit your cluster. Run the following command:
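A minimal sketch, assuming the default Istio ingress gateway that a v1.2 deployment creates in the `istio-system` namespace:

```
# Forward local port 8080 to the Istio ingress gateway, then browse to http://localhost:8080
kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
```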
### Create an Azure search index from a CSV file
:sparkles: Here we outline how to create an Azure search index from a CSV file of 2023 data scientist salary data downloaded from Kaggle.
### 1) Download input CSV
:ear: If you already have your csv ready, skip to section (2).
Download this public [csv file](https://www.kaggle.com/datasets/henryshan/2023-data-scientists-salary?resource=download) from Kaggle to use as our input.
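If you prefer the command line and have the Kaggle CLI installed and authenticated, the dataset can also be fetched directly (the dataset slug is taken from the link above):

```
# Downloads the dataset archive and unzips it into the current directory.
kaggle datasets download -d henryshan/2023-data-scientists-salary --unzip
```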
### 2) Import data into Azure blob storage
:ear: If you already added your data to blob storage, skip to section (3).
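As a sketch of the upload step using the Azure CLI, assuming you already have a storage account and container (all names below are placeholders):

```
# Upload the csv to an existing blob container; requires an authenticated az session.
az storage blob upload \
  --account-name <STORAGE_ACCOUNT_NAME> \
  --container-name <CONTAINER_NAME> \
  --name ds_salaries.csv \
  --file ds_salaries.csv \
  --auth-mode login
```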
Navigate to AI Search and [create a new search](https://learn.microsoft.com/en-u


Click `Import data`.

41
37
42
38
Now fill out all the necessary parameters.
43
39
+ Data Source: Select `Azure Blob Storage`. New options will drop down.
+ Data source name: This can be anything, but go with something like `ds-salaries-data`.
+ Data to extract: Select `Content and metadata`.
+ Parsing mode: Select `Delimited text`. Check the `First Line Contains Header` box and leave `Delimiter Character` as `,`.
+ Connection string: Click `Choose an existing connection` and navigate to your storage account and container.
+ Description: *Optional*.
+ If you get errors when trying to go to the next screen, make sure there are no trailing commas in your csv and no spaces in the header names. If you hit this, fix those errors (a cleanup sketch follows below), re-upload to blob storage, and try again!
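A minimal cleanup sketch for those two fixes, in GNU sed syntax and assuming a file named `ds_salaries.csv` (the filename is illustrative):

```
# Remove a trailing comma from the end of every line, writing a cleaned copy.
sed 's/,$//' ds_salaries.csv > ds_salaries_clean.csv

# In the cleaned copy, replace spaces with underscores in the header row (line 1) in place.
sed -i '1s/ /_/g' ds_salaries_clean.csv
```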

55
51
56
52
Skip ahead to `Customize target index`.
57
53
+ Give your index a name.
+ Make a column that uniquely identifies each row (such as an ID column) your key.
+ Make sure the expected column names are present under fields. For the columns you expect to use, select `Retrievable` and `Searchable`. If you select every column, you will just pay to index fields you are not using.

Navigate to `Indexes` on the left panel and wait until your index shows as many documents as you have lines in your file. It will read 0 documents until it is finished indexing. The example 500-line csv takes about one minute.
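To check the count outside the portal, one option is the search service's REST `$count` endpoint (the service name, index name, key, and api-version below are placeholders to adjust):

```
# Returns the number of documents currently in the index.
curl -H "api-key: <ADMIN_KEY>" \
  'https://<SEARCH_SERVICE>.search.windows.net/indexes/<INDEX_NAME>/docs/$count?api-version=2023-11-01'
```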
And that is it! Now return to [the tutorial notebook to run queries against this csv using GPT-4](/notebooks/GenAI/notebooks/AzureAIStudio_index_structured_with_console.ipynb).