Commit faf8e49 (1 parent 1888b1d), committed: new clarification on proc use
1 file changed, 114 additions, 0 deletions

# SSD proc-based deployment, how to run

## What's in this folder

```
.
├─ 00_install_all.sql               SQLCMD bootstrap, creates or updates all procs
├─ 00_install_all_windows.sql       Windows-slashes variant of the same
├─ populate_ssd_data_warehouse.sql  Orchestrator, declares shared vars and runs each proc
└─ procs/
   ├─ proc_ssd_address.sql
   ├─ proc_ssd_disability.sql
   └─ … one file per table
```

## Quick start in Azure Data Studio or SSMS

### Step 1, install or update procedures

1. Open `00_install_all.sql`
2. Enable SQLCMD mode: use the Command Palette (Toggle SQLCMD), or in Settings search for "sqlcmd" and enable it for MSSQL
3. Connect the editor to the target database
4. Run. This includes every `procs/*.sql` file and compiles the procedures. It also includes `populate_ssd_data_warehouse.sql` so it is available in the editor

Tip: you can also open any single file in `procs` and run it on its own if you only changed one table.

### Step 2, run the orchestrator

1. Open `populate_ssd_data_warehouse.sql`
2. Connect to your target reporting database (e.g. hdm_local), or add `USE <dbname>;` at the top
3. Adjust the variables as needed
   - `@src_schema`, an empty string means use the session's default schema for proc names
   - `@ssd_timeframe_years` and `@ssd_sub1_range_years`, the date-window values
4. Run. The orchestrator prefers `_custom` overrides where present, for example `proc_ssd_person_custom`
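As a rough illustration of the variables in step 3 (a sketch only, the defaults of 6 and 1 are taken from the example parameter values elsewhere in this README, check `populate_ssd_data_warehouse.sql` for the real declarations), the variable block looks something like:

```sql
-- illustrative sketch, not the orchestrator's exact code
DECLARE @src_schema           NVARCHAR(128) = N'';  -- empty = session default schema
DECLARE @ssd_timeframe_years  INT           = 6;    -- assumed default
DECLARE @ssd_sub1_range_years INT           = 1;    -- assumed default
```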

## Quick start, run from the command line (if preferred)

Use the `sqlcmd` utility. The includes in `00_install_all.sql` are relative to the working directory.

Windows PowerShell

```powershell
# install or update procs
sqlcmd -S yourserver -d yourdb -E -b -i ".\00_install_all_windows.sql"

# run the orchestrator
sqlcmd -S yourserver -d yourdb -E -b -i ".\populate_ssd_data_warehouse.sql"
```

Linux or macOS

```bash
# install or update procs
sqlcmd -S yourserver -d yourdb -G -b -i ./00_install_all.sql
# or, if using SQL auth:
# sqlcmd -S yourserver -d yourdb -U youruser -P 'yourpass' -b -i ./00_install_all.sql

# run the orchestrator
sqlcmd -S yourserver -d yourdb -G -b -i ./populate_ssd_data_warehouse.sql
```

Notes

- `-E` uses Windows integrated auth (Windows only), `-G` uses Azure AD, and `-U`/`-P` use SQL auth
- Keep the working directory at the folder that contains `00_install_all.sql`, so the relative `:r "procs/..."` lines resolve correctly

## SQL Server Agent job pattern

Create a two-step job:

1. Step type CmdExec, runs `sqlcmd` to execute `00_install_all.sql`. This compiles or updates the procedures. Use a working directory that contains the files, or pass full paths
2. Step type CmdExec, runs `sqlcmd` to execute `populate_ssd_data_warehouse.sql`

You can also merge these into one step by chaining the two `sqlcmd` invocations in a small `.cmd` file and calling that from the job step.
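For the single-step variant, a minimal `.cmd` wrapper might look like the following (the folder path is hypothetical, substitute your own deployment folder):

```bat
@echo off
rem install or update procs, then run the orchestrator; stop if the install fails
cd /d "C:\path\to\deployment\folder"
sqlcmd -S yourserver -d yourdb -E -b -i ".\00_install_all_windows.sql" || exit /b 1
sqlcmd -S yourserver -d yourdb -E -b -i ".\populate_ssd_data_warehouse.sql"
```

The `-b` flag makes `sqlcmd` return a non-zero exit code on error, so the Agent job step reports failure correctly.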

## Custom per LA overrides

- If, within your LA, a file `procs/proc_ssd_<table>_custom.sql` exists and compiles to a procedure of the same name, the orchestrator will prefer it over the base version. This lets LAs locally define their own requirements for specific SSD table definitions, and it also gives clearer oversight and change management when the D2I source SSD changes.
- The orchestrator passes the same parameter set to both base and custom versions, so your custom proc can accept the same parameters and ignore any it does not need
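The preference check can be pictured with the following T-SQL sketch (a simplification, not the orchestrator's exact code; parameters omitted for brevity):

```sql
-- simplified sketch of the custom-override preference
IF OBJECT_ID(N'proc_ssd_person_custom', N'P') IS NOT NULL
    EXEC proc_ssd_person_custom;   -- LA-specific override wins
ELSE
    EXEC proc_ssd_person;          -- fall back to the base proc
```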

## Calling a single table proc by hand

Every generated procedure accepts a shared parameter set and has safe defaults, so you can run one directly. Note that T-SQL does not allow expressions in an `EXEC` parameter list, so compute the values into variables first:

```sql
DECLARE @today_date date     = CONVERT(date, GETDATE());
DECLARE @today_dt   datetime = GETDATE();
DECLARE @ssd_window_start date = DATEADD(year, -6, @today_date);
DECLARE @ssd_window_end   date = @today_date;
DECLARE @CaseloadLastSept30th date =
    DATEFROMPARTS(YEAR(GETDATE()) - CASE WHEN GETDATE() <= DATEFROMPARTS(YEAR(GETDATE()), 9, 30) THEN 1 ELSE 0 END, 9, 30);
DECLARE @CaseloadTimeframeStartDate date = DATEADD(year, -6, @CaseloadLastSept30th);

EXEC proc_ssd_disability
    @src_db = N'HDM',
    @src_schema = N'',
    @ssd_timeframe_years = 6,
    @ssd_sub1_range_years = 1,
    @today_date = @today_date,
    @today_dt = @today_dt,
    @ssd_window_start = @ssd_window_start,
    @ssd_window_end = @ssd_window_end,
    @CaseloadLastSept30th = @CaseloadLastSept30th,
    @CaseloadTimeframeStartDate = @CaseloadTimeframeStartDate;
```

If you omit parameters, the defaults inside the proc will fill them in.

## Schema handling

- Procedure names: the orchestrator (`populate_ssd_data_warehouse.sql`) uses `@src_schema` when looking up and executing `proc_*` and `proc_*_custom`. Set `@src_schema = N''` to execute without a schema prefix, which uses the default schema of the login.
- Table DDL inside each proc is unqualified, so tables are created or truncated in the default schema of the execution context. If you need to force a schema for tables, connect with a user that has that default schema, or modify the proc body locally to qualify the target tables.
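To see which default schema applies to your current session, and hence where unqualified tables will land, you can run:

```sql
-- shows the default schema of the current execution context
SELECT SCHEMA_NAME() AS default_schema;
```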

## Troubleshooting

- Error "could not find stored procedure": you have not run `00_install_all.sql` against this database, you ran it in another database, or SQLCMD mode was off so the includes did not run. Install again with SQLCMD on, then run the orchestrator.
- Includes do not work: enable SQLCMD mode in the editor tab, and check that the status bar shows SQLCMD.
- The bootstrap cannot find files: open `00_install_all.sql` from the same folder that contains the `procs` subfolder so the relative `:r` lines resolve.
- You still see `ssd_development.` in DDL: regenerate the files, the generator strips that prefix. If you hand-edited any procs, search for and remove those prefixes.
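A quick way to confirm the install reached the right database is to list the installed procedures (the name pattern is assumed from the file names in `procs/`):

```sql
-- list the SSD procedures present in the current database
SELECT name
FROM sys.procedures
WHERE name LIKE 'proc_ssd_%'
ORDER BY name;
```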

## D2I Admin only - Updating to a new source script

- Drop the new source `.sql` into the `ssd_deployment_individual_files` folder
- Run the Python generator. It picks up the newest `.sql` automatically
- In Azure Data Studio, run `00_install_all.sql` with SQLCMD mode on, then run `populate_ssd_data_warehouse.sql`