Commit 5050fbb

Propose fix some typos and add newline if needed (#147)

1 parent 0d5f4a4
6 files changed: +14 additions, -14 deletions


examples/notebooks/showcases/live-data-jupyter.ipynb

Lines changed: 2 additions & 2 deletions

@@ -459,7 +459,7 @@
 "source": [
 "## Jupyter Notebooks & Streaming Data in Production\n",
 "\n",
-"Congratulations! You have succesfully built a live data streaming pipeline with useful data visualisations and real-time alerts, right from a Jupyter notebook \ud83d\ude04\n",
+"Congratulations! You have successfully built a live data streaming pipeline with useful data visualisations and real-time alerts, right from a Jupyter notebook \ud83d\ude04\n",
 "\n",
 "This is just a taste of what is possible. If you're interested in diving deeper and building a production-grade data science pipeline all the way from data exploration to deployment, you may want to check out the full-length [From Jupyter to Deploy](/developers/user-guide/deployment/from-jupyter-to-deploy) tutorial.\n",
 "\n",
@@ -499,4 +499,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}

examples/notebooks/showcases/live_vector_indexing_pipeline.ipynb

Lines changed: 1 addition & 1 deletion

@@ -66,7 +66,7 @@
 "\n",
 "Pathway Vectorstore enables building a document index on top of your documents without the\n",
 "complexity of ETL pipelines, managing different containers for storing, embedding, and serving.\n",
-"It allows for easy to manage, always up-to-date, LLM pipelines accesible using a RESTful API\n",
+"It allows for easy to manage, always up-to-date, LLM pipelines accessible using a RESTful API\n",
 "and with integrations to popular LLM toolkits such as Langchain and LlamaIndex.\n",
 "\n",
 "\n",

examples/notebooks/showcases/mistral_adaptive_rag_question_answering.ipynb

Lines changed: 2 additions & 2 deletions

@@ -405,7 +405,7 @@
 "lines_to_next_cell": 2
 },
 "source": [
-"#### 4. Local LLM Deployement\n",
+"#### 4. Local LLM Deployment\n",
 "Due to its size and performance we decided to run the `Mistral 7B` Local Language Model. We deploy it as a service running on GPU, using `Ollama`.\n",
 "\n",
 "In order to run local LLM, refer to these steps:\n",
@@ -626,4 +626,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}

examples/notebooks/tutorials/alert-deduplication.ipynb

Lines changed: 4 additions & 4 deletions

@@ -173,7 +173,7 @@
 "id": "7",
 "metadata": {},
 "source": [
-"To track the maximum value, we could write `input.groupby().reduce(max=pw.reducers.max(input.value))`. Here we want to keep track also *when* this maximum occured, therefore we use the `argmax_rows` utility function."
+"To track the maximum value, we could write `input.groupby().reduce(max=pw.reducers.max(input.value))`. Here we want to keep track also *when* this maximum occurred, therefore we use the `argmax_rows` utility function."
 ]
 },
 {
@@ -242,7 +242,7 @@
 "id": "14",
 "metadata": {},
 "source": [
-"Now we can send the alerts to e.g. Slack. We can do it similarily as in the [realtime log monitoring tutorial](/developers/templates/etl/realtime-log-monitoring#scenario-2-sending-the-alert-to-slack) by using `pw.io.subscribe`.\n",
+"Now we can send the alerts to e.g. Slack. We can do it similarly as in the [realtime log monitoring tutorial](/developers/templates/etl/realtime-log-monitoring#scenario-2-sending-the-alert-to-slack) by using `pw.io.subscribe`.\n",
 "\n",
 "Here, for testing purposes, instead of sending an alert, we will store the accepted maxima in the list."
 ]
@@ -279,7 +279,7 @@
 "id": "17",
 "metadata": {},
 "source": [
-"Let's run the program. Since the stream we defined is bounded (and we set high `input_rate` in the `generate_custom_stream`), the call to `pw.run` will finish quickly. Hovever, in most usecases, you will be streaming data (e.g. from kafka) indefinitely."
+"Let's run the program. Since the stream we defined is bounded (and we set high `input_rate` in the `generate_custom_stream`), the call to `pw.run` will finish quickly. However, in most usecases, you will be streaming data (e.g. from kafka) indefinitely."
 ]
 },
 {
@@ -386,4 +386,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}

examples/notebooks/tutorials/declarative_vs_imperative.ipynb

Lines changed: 3 additions & 3 deletions

@@ -81,7 +81,7 @@
 "```\n",
 "we would expect three \"finished\" chunks: `(0,1,2)`, `(3,4,5,6)`, `(7,8)` and one unfinished chunk `(9,...)`.\n",
 "\n",
-"One way to do this would be imperative style: go through rows one-by-one in order storing current chunk in a state and emiting it whenever `flag` is equal to True, while clearing the state.\n",
+"One way to do this would be imperative style: go through rows one-by-one in order storing current chunk in a state and emitting it whenever `flag` is equal to True, while clearing the state.\n",
 "Even though, its not recommended approach, let's see how to code it in Pathway."
 ]
 },
@@ -181,7 +181,7 @@
 "source": [
 "Instead of manually managing state and control flow, Pathway allows you to define such logic using declarative constructs like `sort`, `iterate`, `groupby`. The result is a clear and concise pipeline that emits chunks of event times splitting the flag, showcasing the power and readability of declarative data processing.\n",
 "\n",
-"In the following, we tell Pathway to propagate the starting time of each chunk across the rows. This is done by declaring a simple local rule: take the starting time of a chunk from previous row or use current event time. This rule is then iterated until fixed-point, so that the information is spreaded until all rows know the starting time of their chunk.\n",
+"In the following, we tell Pathway to propagate the starting time of each chunk across the rows. This is done by declaring a simple local rule: take the starting time of a chunk from previous row or use current event time. This rule is then iterated until fixed-point, so that the information is spread until all rows know the starting time of their chunk.\n",
 "\n",
 "Then we can just group rows by starting time of the chunk to get a table of chunks."
 ]
@@ -389,4 +389,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}

examples/notebooks/tutorials/rag-evaluations.ipynb

Lines changed: 2 additions & 2 deletions

@@ -1613,7 +1613,7 @@
 "\n",
 "Always structure your responses in the following format:\n",
 "Relevant contexts: [Write the relevant parts of the context for given question]\n",
-"Answer: [Detailed reponse to the user's question that is grounded by the facts you listed]\n",
+"Answer: [Detailed response to the user's question that is grounded by the facts you listed]\n",
 "\n",
 "If you don't know the answer, just say that you don't know.\n",
 "\n",
@@ -1757,4 +1757,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 5
-}
\ No newline at end of file
+}
