
Commit b3484f8

Merge pull request #50809 from JasonWHowell/quotes2
Replacing smart quotes with regular quotes
2 parents b212c6b + 8fa57d0 commit b3484f8

21 files changed: +39 −39 lines changed

articles/hdinsight/hadoop/apache-hadoop-dotnet-avro-serialization.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ The <a href="https://hadoopsdk.codeplex.com/wikipage?title=Avro%20Library" targe
The serialized representation of an object in the Avro system consists of two parts: schema and actual value. The Avro schema describes the language-independent data model of the serialized data with JSON. It is presented side by side with a binary representation of data. Having the schema separate from the binary representation permits each object to be written with no per-value overheads, making serialization fast, and the representation small.

## The Hadoop scenario
-The Apache Avro serialization format is widely used in Azure HDInsight and other Apache Hadoop environments. Avro provides a convenient way to represent complex data structures within a Hadoop MapReduce job. The format of Avro files (Avro object container file) has been designed to support the distributed MapReduce programming model. The key feature that enables the distribution is that the files are “splittable” in the sense that one can seek any point in a file and start reading from a particular block.
+The Apache Avro serialization format is widely used in Azure HDInsight and other Apache Hadoop environments. Avro provides a convenient way to represent complex data structures within a Hadoop MapReduce job. The format of Avro files (Avro object container file) has been designed to support the distributed MapReduce programming model. The key feature that enables the distribution is that the files are "splittable" in the sense that one can seek any point in a file and start reading from a particular block.

## Serialization in Avro Library
The .NET Library for Avro supports two ways of serializing objects:
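For context only (not part of this commit): the hunk above describes Avro's schema-plus-binary layout and splittable object container files. A minimal sketch of the same idea in Python, using the third-party `fastavro` package rather than the .NET library the article covers; the record and field names are hypothetical:

```python
from fastavro import parse_schema, reader, writer

# The schema is plain JSON describing the data model (hypothetical record/field names).
schema = parse_schema({
    "type": "record",
    "name": "SensorReading",
    "fields": [
        {"name": "SensorId", "type": "string"},
        {"name": "Temperature", "type": "double"},
    ],
})

records = [
    {"SensorId": "1", "Temperature": 64.0},
    {"SensorId": "2", "Temperature": 59.5},
]

# Write an Avro object container file: the schema is stored once in the file header,
# so each record carries only its binary value (no per-value schema overhead).
with open("readings.avro", "wb") as out:
    writer(out, schema, records)

# Read it back; the container file is organized in blocks, which is what makes it splittable.
with open("readings.avro", "rb") as fo:
    for rec in reader(fo):
        print(rec)
```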

articles/hdinsight/hdinsight-for-vscode.md

Lines changed: 1 addition & 1 deletion
@@ -270,7 +270,7 @@ HDInsight Tools for VS Code also enables you to submit interactive PySpark queri
After you submit a Python job, submission logs appear in the **OUTPUT** window in VS Code. The **Spark UI URL** and **Yarn UI URL** are shown as well. You can open the URL in a web browser to track the job status.

>[!NOTE]
->PySpark3 is not supported anymore in Livy 0.4 (which is HDI spark 2.2 cluster). Only “PySpark” is supported for python. It is known issue that submit to spark 2.2 fail with python3.
+>PySpark3 is not supported anymore in Livy 0.4 (which is HDI spark 2.2 cluster). Only "PySpark" is supported for python. It is known issue that submit to spark 2.2 fail with python3.

## Livy configuration
Livy configuration is supported, it could be set at the project settings in work space folder. More details, see [Livy README](https://github.com/cloudera/livy/blob/master/README.rst ).
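As an aside (not part of this commit): a job submitted this way is an ordinary PySpark script. A minimal, hypothetical example follows; the file name and storage path are illustrative, and per the note above it would target the Python 2 "PySpark" kernel on a Spark 2.2 cluster:

```python
# hypothetical_job.py - a minimal PySpark script of the kind submitted from VS Code
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SampleLineCount").getOrCreate()

# Read a text file from the cluster's default storage (path is illustrative).
lines = spark.read.text("/example/data/sample.log")
print("Line count:", lines.count())

spark.stop()
```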

articles/hdinsight/hdinsight-hadoop-r-scaler-sparkr.md

Lines changed: 1 addition & 1 deletion
@@ -326,7 +326,7 @@ joinedDF5 <- rename(joinedDF4,

## Save results to CSV for exchange with ScaleR

-That completes the joins we need to do with SparkR. We save the data from the final Spark DataFrame “joinedDF5” to a CSV for input to ScaleR and then close out the SparkR session. We explicitly tell SparkR to save the resultant CSV in 80 separate partitions to enable sufficient parallelism in ScaleR processing:
+That completes the joins we need to do with SparkR. We save the data from the final Spark DataFrame "joinedDF5" to a CSV for input to ScaleR and then close out the SparkR session. We explicitly tell SparkR to save the resultant CSV in 80 separate partitions to enable sufficient parallelism in ScaleR processing:

```
logmsg('output the joined data from Spark to CSV')

articles/hdinsight/r-server/r-server-hdinsight-manage.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ The following screenshot shows the outputs.

![Concurrent user 3](./media/r-server-hdinsight-manage/concurrent-users-2.png)

-When prompted for “Current Kerberos password:”, just press **Enter** to ignore it. The `-m` option in `useradd` command indicates that the system will create a home folder for the user, which is required for RStudio Community version.
+When prompted for "Current Kerberos password:", just press **Enter** to ignore it. The `-m` option in `useradd` command indicates that the system will create a home folder for the user, which is required for RStudio Community version.

### Step 3: Use RStudio Community version with the user created

articles/hdinsight/spark/apache-spark-jupyter-notebook-kernels.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ Here are a few benefits of using the new kernels with Jupyter notebook on Spark

Instead, you can directly use the preset contexts in your application.

-- **Cell magics**. The PySpark kernel provides some predefined “magics”, which are special commands that you can call with `%%` (for example, `%%MAGIC` <args>). The magic command must be the first word in a code cell and allow for multiple lines of content. The magic word should be the first word in the cell. Adding anything before the magic, even comments, causes an error. For more information on magics, see [here](http://ipython.readthedocs.org/en/stable/interactive/magics.html).
+- **Cell magics**. The PySpark kernel provides some predefined "magics", which are special commands that you can call with `%%` (for example, `%%MAGIC` <args>). The magic command must be the first word in a code cell and allow for multiple lines of content. The magic word should be the first word in the cell. Adding anything before the magic, even comments, causes an error. For more information on magics, see [here](http://ipython.readthedocs.org/en/stable/interactive/magics.html).

The following table lists the different magics available through the kernels.
articles/stream-analytics/stream-analytics-build-an-iot-solution-using-stream-analytics.md

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ After completing this solution, you are able to:
You need the following prerequisites to complete this solution:
* An [Azure subscription](https://azure.microsoft.com/pricing/free-trial/)

-## Scenario introduction: “Hello, Toll!”
+## Scenario introduction: "Hello, Toll!"
A toll station is a common phenomenon. You encounter them on many expressways, bridges, and tunnels across the world. Each toll station has multiple toll booths. At manual booths, you stop to pay the toll to an attendant. At automated booths, a sensor on top of each booth scans an RFID card that's affixed to the windshield of your vehicle as you pass the toll booth. It is easy to visualize the passage of vehicles through these toll stations as an event stream over which interesting operations can be performed.

![Picture of cars at toll booths](media/stream-analytics-build-an-iot-solution-using-stream-analytics/image1.jpg)
@@ -106,7 +106,7 @@ Here is a short description of the columns:
## Set up the environment for Azure Stream Analytics
To complete this solution, you need a Microsoft Azure subscription. If you do not have an Azure account, you can [request a free trial version](http://azure.microsoft.com/pricing/free-trial/).

-Be sure to follow the steps in the “Clean up your Azure account” section at the end of this article so that you can make the best use of your Azure credit.
+Be sure to follow the steps in the "Clean up your Azure account" section at the end of this article so that you can make the best use of your Azure credit.

## Deploy the sample
There are several resources that can easily be deployed in a resource group together with a few clicks. The solution definition is hosted in github repository at [https://github.com/Azure/azure-stream-analytics/tree/master/Samples/TollApp](https://github.com/Azure/azure-stream-analytics/tree/master/Samples/TollApp).

articles/stream-analytics/stream-analytics-common-troubleshooting-issues.md

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ ms.date: 04/12/2018

![Inputs tile](media/stream-analytics-malformed-events/inputs_tile.png)

-To see more information, enable the diagnostics logs to view the details of the warning. For malformed input events, the execution logs contain an entry with the message that looks like: “Message: Could not deserialize the input event(s) from resource <blob URI> as json".
+To see more information, enable the diagnostics logs to view the details of the warning. For malformed input events, the execution logs contain an entry with the message that looks like: "Message: Could not deserialize the input event(s) from resource <blob URI> as json".

### Troubleshooting steps

articles/stream-analytics/stream-analytics-compatibility-level.md

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ Compatibility level makes sure that existing jobs run without any failure. When

## Set a compatibility level

-Compatibility level controls the runtime behavior of a stream analytics job. You can set the compatibility level for a Stream Analytics job by using portal or by using the [create job REST API call](https://docs.microsoft.com/rest/api/streamanalytics/stream-analytics-job). Azure Stream Analytics currently supports two compatibility levels- “1.0” and “1.1”. By default, the compatibility level is set to “1.0” which was introduced during general availability of Azure Stream Analytics. To update the default value, navigate to your existing Stream Analytics job > select the **Compatibility Level** option in **Configure** section and change the value.
+Compatibility level controls the runtime behavior of a stream analytics job. You can set the compatibility level for a Stream Analytics job by using portal or by using the [create job REST API call](https://docs.microsoft.com/rest/api/streamanalytics/stream-analytics-job). Azure Stream Analytics currently supports two compatibility levels- "1.0" and "1.1". By default, the compatibility level is set to "1.0" which was introduced during general availability of Azure Stream Analytics. To update the default value, navigate to your existing Stream Analytics job > select the **Compatibility Level** option in **Configure** section and change the value.

Make sure that you stop the job before updating the compatibility level. You can’t update the compatibility level if your job is in a running state.

@@ -35,11 +35,11 @@ The following major changes are introduced in compatibility level 1.1:

* **previous versions:** Azure Stream Analytics used DataContractSerializer, so the message content included XML tags. For example:

-@\u0006string\b3http://schemas.microsoft.com/2003/10/Serialization/\u0001{ “SensorId”:”1”, “Temperature”:64\}\u0001
+@\u0006string\b3http://schemas.microsoft.com/2003/10/Serialization/\u0001{ "SensorId":"1", "Temperature":64\}\u0001

* **current version:** The message content contains the stream directly with no additional tags. For example:

-{ “SensorId”:”1”, “Temperature”:64}
+{ "SensorId":"1", "Temperature":64}

* **Persisting case-sensitivity for field names**

articles/stream-analytics/stream-analytics-define-outputs.md

Lines changed: 2 additions & 2 deletions
@@ -143,7 +143,7 @@ Once you have the Power BI account authenticated, you can configure the properti
| Property name | description |
| --- | --- |
| Output alias |A friendly name used in queries to direct the query output to this PowerBI output. |
-| Group workspace |To enable sharing data with other Power BI users you can select groups inside your Power BI account or choose “My Workspace” if you do not want to write to a group. Updating an existing group requires renewing the Power BI authentication. |
+| Group workspace |To enable sharing data with other Power BI users you can select groups inside your Power BI account or choose "My Workspace" if you do not want to write to a group. Updating an existing group requires renewing the Power BI authentication. |
| Dataset name |Provide a dataset name that it is desired for the Power BI output to use |
| Table name |Provide a table name under the dataset of the Power BI output. Currently, Power BI output from Stream Analytics jobs can only have one table in a dataset |

@@ -167,7 +167,7 @@ bigint | Int64
nvarchar(max) | String
datetime | Datetime
float | Double
-Record array | String type, Constant value “IRecord” or “IArray”
+Record array | String type, Constant value "IRecord" or "IArray"

### Schema Update
Stream Analytics infers the data model schema based on the first set of events in the output. Later, if necessary, the data model schema is updated to accommodate incoming events that may not fit into the original schema.

articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ The simplest form of query is a pass-through query that archives all input data
![Test results](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png)

### Query: Filter the data based on a condition
-Let’s try to filter the results based on a condition. We would like to show results for only those events that come from “sensorA.” The query is in the Filtering.txt file.
+Let’s try to filter the results based on a condition. We would like to show results for only those events that come from "sensorA." The query is in the Filtering.txt file.

![Filtering a data stream](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png)
