The Pitt-Google broker is an astronomical alert broker that is being developed for large-scale surveys of the night sky, particularly the upcoming [Vera Rubin Observatory's Legacy Survey of Space and Time](https://www.lsst.org/) (LSST).
We currently process and serve the [Zwicky Transient Facility](https://www.ztf.caltech.edu/)'s (ZTF) nightly alert stream.
The broker runs on the [Google Cloud Platform](https://cloud.google.com) (GCP).
---
## Access the Data
See [Pitt-Google-Tutorial-Code-Samples.ipynb](https://github.com/mwvgroup/Pitt-Google-Broker/blob/master/pgb_utils/tutorials/Pitt-Google-Tutorial-Code-Samples.ipynb) for a tutorial.
Data can be accessed using Google's [Cloud SDK](https://cloud.google.com/sdk) (Python, command-line, etc.).
In addition, we offer the Python package `pgb_utils`, which contains wrappers for Cloud SDK methods and other helper functions that facilitate common use cases.
See the tutorials for details.
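As a rough illustration of what handling a single alert message can look like, the sketch below decodes a JSON-serialized alert from a Pub/Sub message payload. The field names and the base64 framing here are illustrative assumptions (push deliveries base64-encode the body), not the actual Pitt-Google or ZTF schema; see the tutorial notebook for the real interface.

```python
import base64
import json


def decode_pubsub_message(message_data: bytes) -> dict:
    """Decode a base64-encoded Pub/Sub payload carrying a JSON-serialized alert."""
    return json.loads(base64.b64decode(message_data))


# Simulate a message as it might arrive from a broker stream.
# The field names below are invented for illustration only.
fake_alert = {"objectId": "ZTF21abcdefg", "candidate": {"magpsf": 18.3}}
payload = base64.b64encode(json.dumps(fake_alert).encode("utf-8"))

alert = decode_pubsub_message(payload)
print(alert["objectId"])  # ZTF21abcdefg
```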
If you run into issues or need assistance, please open an Issue on GitHub or contact troy.raen@pitt.edu.
---
Documentation is at [pitt-broker.readthedocs.io](https://pitt-broker.readthedocs.io/).
## Run the Alert Broker
See [broker/README.md](broker/README.md) for information about the alert broker software and instructions for running it. The broker connects to a survey alert stream (e.g., ZTF) and processes and redistributes the data. Those looking to __access__ the data do not need to run the broker; instead, see [Access the Data](#access-the-data).
---

`broker/broker_utils/broker_utils/consumer_sim.py` (12 additions, 8 deletions):
```diff
@@ -21,7 +21,8 @@ def publish_stream(
     publish_batch_every: Tuple[int, str] = (5, 'sec'),
     sub_id: Optional[str] = None,
     topic_id: Optional[str] = None,
-    nack: bool = False
+    nack: bool = False,
+    auto_confirm: bool = False
 ):
     """Pulls messages from a Pub/Sub subscription determined by either
     `instance` or `sub_id`, and publishes them to a topic determined by either
@@ -54,6 +55,8 @@ def publish_stream(
         messages are published to the topic, but they are not
         dropped from the subscription and so will be delivered again
         at an arbitrary time in the future.
+
+        auto_confirm: Whether to automatically answer "Y" to the confirmation prompt.
     """

     pbeN, pbeU = publish_batch_every  # shorthand
@@ -69,10 +72,10 @@ def publish_stream(
     print(f"\nPublishing:\n\t{Nbatches} batches\n\teach with {alerts_per_batch} alerts\n\tat a rate of 1 batch per {pbeN} {pbeU} (plus processing time)\n\tfor a total of {Nbatches * alerts_per_batch} alerts")
```
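The new `auto_confirm` parameter presumably short-circuits the interactive confirmation prompt so the simulator can run unattended. A minimal sketch of that pattern (the function name and prompt text below are hypothetical, not the broker's actual implementation):

```python
def confirm_publish(summary: str, auto_confirm: bool = False) -> bool:
    """Return True if the user (or the auto_confirm flag) approves publishing."""
    if auto_confirm:
        # Skip the prompt entirely, e.g. for scripted / non-interactive runs.
        return True
    answer = input(f"{summary}\nContinue? [Y/n]: ")
    return answer.strip().lower() in ("", "y", "yes")


# With auto_confirm=True, no prompt is shown and publishing proceeds.
print(confirm_publish("Publishing 10 batches", auto_confirm=True))  # True
```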
---

The files in this directory contain mappings between the schema of an individual survey and a PGB-standardized schema that is used within the broker code.

Note: This directory is __not__ packaged with the `broker_utils` module. To allow each broker instance to use its own schema maps, independent of other instances, every instance uploads this directory to its `broker_files` Cloud Storage bucket upon setup. The broker code loads the schema maps from the bucket of the appropriate instance at runtime.
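For illustration, a schema map of this kind might look like the following. The field names here are invented, not the actual ZTF or PGB schema, and the `standardize` helper is a hypothetical sketch of how such a map could be applied:

```python
# Hypothetical mapping from a survey's native field names to PGB-standardized names.
ztf_schema_map = {
    "objectId": "source_id",
    "candid": "alert_id",
}


def standardize(alert: dict, schema_map: dict) -> dict:
    """Rename top-level alert fields according to the schema map.

    Fields not present in the map are passed through unchanged.
    """
    return {schema_map.get(key, key): value for key, value in alert.items()}


alert = {"objectId": "ZTF21abcdefg", "candid": 1234567890, "extra": 1}
print(standardize(alert, ztf_schema_map))
# {'source_id': 'ZTF21abcdefg', 'alert_id': 1234567890, 'extra': 1}
```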
---

This Cloud Function checks whether the broker responded appropriately to the auto-scheduler's cue. It does this by first pausing to allow time for the response, and then checking each broker component, such as VMs and Dataflow jobs. If a component is found to be in an unexpected state, a "Critical" error is raised, which triggers a GCP alerting policy.

This Cloud Function is triggered by the auto-scheduler's Pub/Sub topic. For reference, the auto-scheduling process looks like this (see [Auto-scheduler](auto-scheduler.md)):

Cloud Scheduler cron job -> Pub/Sub -> Cloud Function -> Night Conductor VM startup
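A minimal sketch of the check-and-raise pattern described above, using Python's standard `logging` module. The component names, states, and function signature here are invented for illustration; the real Cloud Function inspects actual GCP resources and writes to Cloud Logging:

```python
import logging

# The real function writes to a Cloud Logging log of this name.
logger = logging.getLogger("check-cue-response-cloudfnc")


def check_cue_response(observed: dict, expected: dict) -> list:
    """Compare observed component states to expected states.

    Logs a CRITICAL record for each mismatch, which (in the real broker)
    would trigger the GCP alerting policy.
    """
    bad = [name for name, state in expected.items() if observed.get(name) != state]
    for name in bad:
        logger.critical("component %s is in unexpected state %r", name, observed.get(name))
    return bad


expected = {"night-conductor-vm": "RUNNING", "dataflow-job": "RUNNING"}
observed = {"night-conductor-vm": "RUNNING", "dataflow-job": "FAILED"}
print(check_cue_response(observed, expected))  # ['dataflow-job']
```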
## Alerting policy
An alerting policy was created manually to notify Troy Raen of anything written to the log named `check-cue-response-cloudfnc` that has severity `'CRITICAL'`. Every broker instance has a unique `check_cue_response` Cloud Function, but they all write to the same log; therefore, a new policy does not need to be created with each new broker instance. (Also, recall that the auto-scheduler is typically only active in Production instances.)

To update the existing policy, or create a new one, see: