Documents/panel.md

# Panel CSV Loader

## Overview
The panel loader is designed to read a CSV file and load the data onto the HMDA-Platform. The CSV file should use the `|` (pipe) delimiter, and should include a header row as the first line.
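
For illustration, here is the expected shape of the input; the column names and values below are invented, purely to show the pipe delimiter and the header row:

```
institution_name|agency_code|activity_year
Sample Bank|1|2017
```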
## Environment Variables
There are two environment variables used by the panel loader. Both must be set correctly in order for the data to be sent to the admin API.

For testing locally, no changes need to be made. The defaults for both of these variables will point to the correct local admin API.

For loading panel data into a remote system, you'll need to set the following environment variables:
```shell
> export HMDA_HTTP_ADMIN_HOST={ip address}
> export HMDA_HTTP_ADMIN_PORT={port #}
```
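
For example, with placeholder values (`192.0.2.10` is a documentation-range address and `8081` an arbitrary port, not known defaults):

```shell
> export HMDA_HTTP_ADMIN_HOST=192.0.2.10
> export HMDA_HTTP_ADMIN_PORT=8081
```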
## Running the loader
A small example file is located at `panel/src/main/resources/inst_data_2017_dummy.csv`.

The real panel file is located at `panel/src/main/resources/inst_data_2017.csv`.

In order for the panel data to be loaded locally, the API project must be up and running, along with Docker containers running Cassandra and ZooKeeper; no other services are needed, but make sure your environment variables are set.
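
If Cassandra and ZooKeeper are not already running, one way to start them is from the stock Docker Hub images. This is a sketch that assumes the platform expects the default ports (9042 for Cassandra, 2181 for ZooKeeper); your setup may provide its own compose file instead:

```shell
> docker run -d --name cassandra -p 9042:9042 cassandra
> docker run -d --name zookeeper -p 2181:2181 zookeeper
```

Then, in a terminal, execute the following commands: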
```shell
> sbt
sbt> project panel
sbt> run /path/to/panelData.csv
```
The loader can also be packaged and run as a standalone jar:

```shell
sbt> assembly
```
Then the panel loader can be run with `java -jar panel/target/scala-2.12/panel.jar path/to/institution_file.csv`.
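
For example, a remote load of the real panel file named earlier would look like this (assuming `HMDA_HTTP_ADMIN_HOST` and `HMDA_HTTP_ADMIN_PORT` are already exported as described above):

```shell
> java -jar panel/target/scala-2.12/panel.jar panel/src/main/resources/inst_data_2017.csv
```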
## Error codes
There are four ways the panel loader can fail. The exit code and error message should tell you what happened; see the sketch after this list for a quick way to check.

1. There were no command line arguments passed to the loader.
2. The path passed to the loader didn't point to a file.
3. The call to `institutions/create` didn't return the correct response. This can indicate that you don't have the correct environment variables set, or that something is wrong with the hmda-platform.
4. The loader didn't finish processing all the institutions. This is known to happen when running the real panel file; the cause hasn't been determined yet.
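
A quick way to see which case you hit is to check the process exit status right after a run. Note that the assumption that the numbered cases above map directly to exit codes is an inference from this list, not something documented here:

```shell
> java -jar panel/target/scala-2.12/panel.jar path/to/institution_file.csv
> echo $?
```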
## Testing
Make sure your authorization header is updated with a few real `id_rssd` fields from the given file. These can be found in the API log output (first field argument in the `InstitutionQuery` object) or in the CSV file (seventh field).
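
Since the file is pipe-delimited with a header row and `id_rssd` is the seventh field, a few real values can be pulled out with standard tools, skipping the header line:

```shell
> awk -F'|' 'NR > 1 { print $7 }' panel/src/main/resources/inst_data_2017_dummy.csv | head -5
```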
README.md
```shell
$ sbt
project panel
run <full local path to sample file>
```
A sample file is located at `panel/src/main/resources/inst_data_2017_dummy.csv`.
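
For example, if the repository were checked out at `/home/dev/hmda-platform` (a made-up location), the run line inside sbt would be:

```shell
run /home/dev/hmda-platform/panel/src/main/resources/inst_data_2017_dummy.csv
```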
* In order to support the read side, local PostgreSQL and Cassandra servers are needed. Assuming they run on the default ports, on the same machine as the API, the following environment variable needs to be set: