
Commit 832780a

Fixed docs code formating and linting
1 parent b1f5304 commit 832780a

3 files changed: +11 −11 lines changed


docs/content/getting_started/runoboardingopt1.md

Lines changed: 5 additions & 4 deletions
@@ -27,7 +27,7 @@ draft: false
 8. Click Add.
 
 9. In Parameters, select keyword argument then select JSON. Past below json parameters with :
-```
+```json
 {
 "onboard_layer": "bronze_silver",
 "database": "dlt_demo",
@@ -41,11 +41,12 @@ draft: false
 "overwrite": "True",
 "env": "dev"
 }
-```
-Alternatly you can enter keyword arguments, click + Add and enter a key and value. Click + Add again to enter more arguments.
+```
+
+Alternatly you can enter keyword arguments, click + Add and enter a key and value. Click + Add again to enter more arguments.
 
 10. Click Save task.
 
 11. Run now
 
-12. Make sure job run successfully. Verify metadata in your dataflow spec tables entered in step: 9 e.g ```dlt_demo.bronze_dataflowspec_table``` , ```dlt_demo.silver_dataflowspec_table```
+12. Make sure job run successfully. Verify metadata in your dataflow spec tables entered in step: 11 e.g ```dlt_demo.bronze_dataflowspec_table``` , ```dlt_demo.silver_dataflowspec_table```
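
Since this change retags the fence as ```json, the parameter block should now parse as strict JSON. A minimal sketch for sanity-checking a pasted block before entering it as the task's keyword arguments, using only the keys visible in this diff (the full block in the doc contains more entries, elided between the two hunks):

```python
# Sanity-check the onboarding parameters before pasting them into the
# job task. Keys between the two diff hunks above are omitted here,
# so this sample map is intentionally incomplete.
import json

params_text = """
{
"onboard_layer": "bronze_silver",
"database": "dlt_demo",
"overwrite": "True",
"env": "dev"
}
"""

params = json.loads(params_text)  # raises json.JSONDecodeError on malformed input
print(f"parsed {len(params)} onboarding parameters for env {params['env']}")
```

Once the run in step 12 succeeds, the spec tables named in the doc can be inspected from a notebook with standard PySpark, e.g. ```spark.table("dlt_demo.bronze_dataflowspec_table").show()```.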

docs/content/getting_started/runoboardingopt2.md

Lines changed: 5 additions & 6 deletions
@@ -7,10 +7,9 @@ draft: false
 
 #### Option#2: Notebook
 1. Copy below code to databricks notebook cells
-```
-%pip install dlt-meta
-```
-```
+```%pip install dlt-meta```
+
+```python
 onboarding_params_map = {
 "database": "dlt_demo",
 "onboarding_file_path": "dbfs:/onboarding_files/users_onboarding.json",
@@ -23,12 +22,12 @@ onboarding_params_map = {
 "env": "dev",
 "version": "v1",
 "import_author": "Ravi"
-}
+}
 
 from src.onboard_dataflowspec import OnboardDataflowspec
 OnboardDataflowspec(spark, onboarding_params_map).onboard_dataflow_specs()
-
 ```
+
 2. Specify your onboarding config params in above ```onboarding_params_map```
 
 3. Run notebook cells
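
Assembled, the notebook code after this commit reads roughly as below. This is a sketch, not the full file: the keys between the two hunks are elided in the diff, and `spark` is assumed to be the SparkSession a Databricks notebook provides.

```python
# First cell: %pip install dlt-meta
# Second cell (keys elided between the diff hunks are omitted):
onboarding_params_map = {
    "database": "dlt_demo",
    "onboarding_file_path": "dbfs:/onboarding_files/users_onboarding.json",
    # ...remaining keys not shown in this diff...
    "env": "dev",
    "version": "v1",
    "import_author": "Ravi",
}

from src.onboard_dataflowspec import OnboardDataflowspec

# `spark` is the SparkSession injected by the Databricks notebook runtime.
OnboardDataflowspec(spark, onboarding_params_map).onboard_dataflow_specs()
```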

tests/test_pipeline_readers.py

Lines changed: 1 addition & 1 deletion
@@ -163,7 +163,7 @@ def test_read_delta_with_read_config_positive(self):
         bronze_map.update(reader_config)
         bronze_dataflow_spec = BronzeDataflowSpec(**bronze_map)
         customer_df = PipelineReaders.read_dlt_delta(self.spark, bronze_dataflow_spec)
-        self.assertIsNotNone(customer_df)
+        self.assertIsNotNone(customer_df)
 
     @patch.object(PipelineReaders, "get_db_utils", return_value=dbutils)
     @patch.object(dbutils, "secrets.get", return_value={"called"})
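
The context lines show this assertion lives in a `unittest.TestCase` whose collaborators are mocked with `@patch.object`. A self-contained sketch of that pattern, with all names hypothetical rather than taken from dlt-meta:

```python
import unittest
from unittest.mock import patch


class Reader:
    """Hypothetical stand-in for PipelineReaders."""

    @staticmethod
    def read_delta(source):
        raise RuntimeError("would touch real storage in production")


class ReaderTest(unittest.TestCase):
    # Replace the real reader with a mock for the duration of the test,
    # mirroring the @patch.object decorators in the diff above.
    @patch.object(Reader, "read_delta", return_value={"rows": 1})
    def test_read_delta_returns_something(self, mock_read):
        df = Reader.read_delta("bronze_table")
        self.assertIsNotNone(df)  # same assertion style as the fixed line
        mock_read.assert_called_once_with("bronze_table")


if __name__ == "__main__":
    unittest.main()
```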
