### Creating a bucket on Cloud Storage
This bucket will hold the deployed code for this solution. To create it, navigate to the *Storage* link in the top-left menu on GCP and click *Create bucket*. A Regional location and the Standard storage class are fine for this bucket.
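If you'd rather script this step than click through the console, the sketch below creates an equivalent bucket with the `google-cloud-storage` Python client. The project ID, bucket name, and region are placeholders; replace them with your own.

```python
# Sketch: create the deployment bucket with the google-cloud-storage client.
# The project ID, bucket name, and region below are placeholders.
from google.cloud import storage

client = storage.Client(project="my-gcp-project")
bucket = storage.Bucket(client, name="my-megalista-bucket")
bucket.storage_class = "STANDARD"  # the Standard storage class suggested above
client.create_bucket(bucket, location="us-central1")  # any Regional location works
```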
## Running Megalista
We recommend first running Megalista locally to make sure that everything works.
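For context, Megalista runs on Dataflow, i.e. Apache Beam, and Beam pipelines run locally on the `DirectRunner`, which executes everything in-process. The snippet below is only a generic illustration of that runner selection, not Megalista's actual entry point; check the repository's run instructions for the real command and flags.

```python
# Generic illustration of local Beam execution with the DirectRunner;
# Megalista's real entry point and flags may differ.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(flags=[], runner="DirectRunner", project="my-gcp-project")

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | beam.Create(["smoke-test"])  # placeholder for Megalista's own transforms
     | beam.Map(print))
```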
Create some sample tables on BigQuery for one of the uploaders and verify that the data reaches the destination correctly.
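As a sketch of what such a sample table might look like, the snippet below creates one with the `google-cloud-bigquery` client and streams in a couple of test rows. The dataset, table name, schema, and rows are purely illustrative; each uploader expects its own set of columns, so check the uploader's documentation for the real schema.

```python
# Sketch: create a small sample table to feed one of the uploaders.
# Dataset, table name, schema, and rows are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")
client.create_dataset("megalista_samples", exists_ok=True)

table = bigquery.Table(
    "my-gcp-project.megalista_samples.sample_upload",
    schema=[
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("email", "STRING"),
    ],
)
client.create_table(table, exists_ok=True)

# Stream in a couple of test rows and surface any insertion errors.
errors = client.insert_rows_json(table, [
    {"user_id": "1", "email": "test1@example.com"},
    {"user_id": "2", "email": "test2@example.com"},
])
assert not errors, errors
```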
After that is done, upload the Dataflow template to GCP and try running it manually via the UI to make sure it works.
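Once the template is staged, you can also trigger it from a script rather than the UI by calling the Dataflow `templates.launch` REST method, as sketched below. The GCS path is a placeholder for wherever you staged the template, and the `parameters` map must be filled with Megalista's actual runtime parameters.

```python
# Sketch: launch the staged Dataflow template programmatically.
# The gcsPath and parameters are placeholders for your own staging
# location and Megalista's real runtime parameters.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
response = dataflow.projects().locations().templates().launch(
    projectId="my-gcp-project",
    location="us-central1",
    gcsPath="gs://my-megalista-bucket/templates/megalista",
    body={
        "jobName": "megalista-manual-run",
        "parameters": {},  # fill in the template's runtime parameters
    },
).execute()
print(response["job"]["id"])
```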
Lastly, configure Cloud Scheduler to run Megalista at the desired frequency, and you'll have a fully functional data integration pipeline.
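A sketch of that Scheduler job is shown below: it simply POSTs to the same Dataflow `templates.launch` endpoint on a cron schedule. The project, region, bucket path, schedule, and service account are all placeholders, and the service account needs permission to launch Dataflow jobs.

```python
# Sketch: a Cloud Scheduler job that launches the Megalista template on a
# cron schedule. All names, paths, and the schedule are placeholders.
import json

from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-gcp-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/run-megalista",
    schedule="0 */8 * * *",  # every 8 hours; pick the frequency you need
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri=(
            "https://dataflow.googleapis.com/v1b3/projects/my-gcp-project"
            "/locations/us-central1/templates:launch"
            "?gcsPath=gs://my-megalista-bucket/templates/megalista"
        ),
        http_method=scheduler_v1.HttpMethod.POST,
        headers={"Content-Type": "application/json"},
        body=json.dumps(
            {"jobName": "megalista-scheduled-run", "parameters": {}}
        ).encode(),
        # The service account must be allowed to launch Dataflow jobs.
        oauth_token=scheduler_v1.OAuthToken(
            service_account_email="scheduler@my-gcp-project.iam.gserviceaccount.com"
        ),
    ),
)
client.create_job(parent=parent, job=job)
```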