
Commit b794831

Bulk update: Global effort to fix validation errors

1 parent 5d54915 · commit b794831
21 files changed: +33 −33 lines

articles/active-directory-b2c/customize-ui-with-html.md

Lines changed: 1 addition & 1 deletion
@@ -264,7 +264,7 @@ Configure Blob storage for Cross-Origin Resource Sharing by performing the follo
 Validate that you're ready by performing the following steps:

 1. Repeat the configure CORS step. For **Allowed origins**, enter `https://www.test-cors.org`
-1. Navigate to [www.test-cors.org](https://www.test-cors.org/)
+1. Navigate to `www.test-cors.org`
 1. For the **Remote URL** box, paste the URL of your HTML file. For example, `https://your-account.blob.core.windows.net/root/azure-ad-b2c/unified.html`
 1. Select **Send Request**.
 The result should be `XHR status: 200`.
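The CORS rule these validation steps exercise can also be set programmatically rather than through the portal. A minimal sketch, assuming the `azure-storage-blob` v12 SDK and a connection string in an `AZURE_STORAGE_CONNECTION_STRING` environment variable (both are assumptions, not part of this commit):

```python
import os

from azure.storage.blob import BlobServiceClient, CorsRule

# Connection-string variable name is an assumption for this sketch.
service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# Allow the test origin used in the validation steps above.
rule = CorsRule(
    allowed_origins=["https://www.test-cors.org"],
    allowed_methods=["GET", "OPTIONS"],
    allowed_headers=["*"],
    exposed_headers=["*"],
    max_age_in_seconds=200,
)

# Replaces the blob service's CORS rules with this single rule.
service.set_service_properties(cors=[rule])
```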

articles/app-service/includes/deploy-intelligent-apps/deploy-intelligent-apps-linux-java-pivot.md

Lines changed: 2 additions & 2 deletions
@@ -65,7 +65,7 @@ For OpenAI, use the following settings:
 | `OPENAI_API_KEY` | @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/) |
 | `OPENAI_MODEL_NAME` | @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/) |

-Once your app settings are saved, you can access the app settings in your code by referencing them in your application. Add the following code in the *[Application.java](http://Application.java)* file:
+Once your app settings are saved, you can access the app settings in your code by referencing them in your application. Add the following code in the *Application.java `http://Application.java`* file:

 For Azure OpenAI:

@@ -131,7 +131,7 @@ OpenAIClient client = new OpenAIClientBuilder()
     .buildClient();
 ```

-Once added, you see the following imports are added to the [Application.java](http://Application.java) file:
+Once added, you see the following imports are added to the Application.java `http://Application.java` file:

 ```java
 import com.azure.ai.openai.OpenAIClient;
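The flow this file documents (App Service resolves the `@Microsoft.KeyVault(...)` references into plain app settings, which the code then reads) is language-agnostic even though the article's snippets are Java. As a cross-language illustration only, a minimal Python sketch assuming the `openai` v1 package and the setting names from the table above; the endpoint variable name and API version are illustrative assumptions:

```python
import os

from openai import AzureOpenAI  # pip install openai

# App Service resolves @Microsoft.KeyVault(...) references before startup,
# so the process sees ordinary environment variables.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # illustrative name
    api_key=os.environ["OPENAI_API_KEY"],
    api_version="2024-02-01",  # illustrative version
)

response = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL_NAME"],  # deployment name on Azure
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```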

articles/app-service/includes/deploy-intelligent-apps/deploy-intelligent-apps-linux-python-pivot.md

Lines changed: 3 additions & 3 deletions
@@ -118,7 +118,7 @@ For OpenAI, use the following:
 |-|-|-|
 | `OPENAI_API_KEY` | @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/) |

-Once your app settings are saved, you can [access the app settings](https://www.notion.so/Creating-Intelligent-App-on-App-Service-Python-757641ec4eda4dde88c9cad02d542170?pvs=21) in your code by referencing them in your application. Add the following to the *[app.py](http://app.py) file:*
+Once your app settings are saved, you can access the app settings in your code by referencing them in your application. Add the following to the *app.py `http://app.py` file:*

 For Azure OpenAI:
 ```python
@@ -147,7 +147,7 @@ To install LangChain, navigate to your application using Command Line or PowerSh
 pip install langchain-openai
 ```

-Once the package is installed, you can import and use LangChain. Update the *[app.py](http://app.py)* file with the following code:
+Once the package is installed, you can import and use LangChain. Update the * app.py `http://app.py`* file with the following code:

 ```python
 import os
@@ -160,7 +160,7 @@ from langchain_openai import AzureOpenAI~~

 ```

-After LangChain is imported into our file, you can add the code that will call to OpenAI with the LangChain invoke chat method. Update *[app.py](http://app.py)* to include the following code:
+After LangChain is imported into our file, you can add the code that will call to OpenAI with the LangChain invoke chat method. Update *app.py `http://app.py`* to include the following code:

 For Azure OpenAI, use the following code:
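The *app.py* code these hunks refer to is elided from the diff. As a rough sketch of the LangChain invoke pattern the text describes, assuming `langchain-openai` and Azure OpenAI settings exposed as environment variables (all names illustrative, not taken from the article):

```python
import os

from langchain_openai import AzureChatOpenAI  # pip install langchain-openai

# App settings (including resolved Key Vault references) arrive as
# environment variables; the names here are illustrative only.
llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
)

# The "LangChain invoke chat method" mentioned above.
reply = llm.invoke("Say hello in one sentence.")
print(reply.content)
```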

articles/app-service/troubleshoot-intermittent-outbound-connection-errors.md

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ Although PHP doesn't support connection pooling, you can try using persistent da

 * Other data Sources

-  * [PHP Connection Management](https://www.php.net/manual/en/pdo.connections.php)
+  * [PHP Connection Management](https://www.php.net/manual/pdo.connections.php)

 #### Python
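The Python guidance under the heading above is elided from this hunk. As a generic illustration of the connection pooling the article recommends for reusing outbound connections, a sketch using SQLAlchemy (the library choice and DSN are assumptions, not taken from the article):

```python
from sqlalchemy import create_engine, text

# One process-wide engine keeps a pool of reusable connections, instead
# of opening a new outbound connection on every request.
engine = create_engine(
    "postgresql+psycopg2://user:pass@dbhost/dbname",  # illustrative DSN
    pool_size=5,
    max_overflow=10,
    pool_recycle=300,  # recycle idle connections periodically
)

def handle_request() -> int:
    # Borrow a pooled connection rather than creating a fresh one.
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).scalar()
```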

articles/azure-vmware/configure-windows-server-failover-cluster.md

Lines changed: 1 addition & 1 deletion
@@ -145,7 +145,7 @@ The following activities aren't supported and might cause WSFC node failover:
 ## Related information

 - [Failover Clustering in Windows Server](/windows-server/failover-clustering/failover-clustering-overview)
-- [Guidelines for Microsoft Clustering on vSphere (1037959) (vmware.com)](https://kb.vmware.com/s/article/1037959)
+- [Guidelines for Microsoft Clustering on vSphere (1037959) (vmware.com)](https://knowledge.broadcom.com/external/article?legacyId=1037959)
 - [About Setup for Failover Clustering and Microsoft Cluster Service (vmware.com)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.mscs.doc/GUID-1A2476C0-CA66-4B80-B6F9-8421B6983808.html)
 - [vSAN 6.7 U3 - WSFC with Shared Disks & SCSI-3 Persistent Reservations (vmware.com)](https://blogs.vmware.com/virtualblocks/2019/08/23/vsan67-u3-wsfc-shared-disksupport/)
 - [Azure VMware Solution limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits)

articles/confidential-computing/partner-pages/habu.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ ms.author: ananyagarg

 Data clean rooms allow organizations to share data and collaborate on analytics without compromising privacy and security. Habu is a pioneer in data clean rooms, offering a fully interoperable cloud solution that allows multiple organizations to collaborate without moving data. They now provide clean rooms that support [Azure confidential computing](../overview.md) on [AMD powered confidential VMs](../confidential-vm-overview.md) to increase data privacy protection.

-Collaboration partners can now participate in cross-cloud, cross-region data sharing - with protections against unauthorized access to data across partners, cloud providers, and even Habu. You can hear more from Habu’s Chief Product Officer, Matthew Karasick, on their [partnership with Azure here](https://build.microsoft.com/en-US/sessions/4cdcea58-d6fa-43f9-a1ea-27a8983e3f57?source=partnerdetail).
+Collaboration partners can now participate in cross-cloud, cross-region data sharing - with protections against unauthorized access to data across partners, cloud providers, and even Habu.

 You can also get started on their [Azure Marketplace solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/habuinc1663874067667.habu?tab=Overview), today.

articles/cosmos-db/global-dist-under-the-hood.md

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ The service allows you to configure your Azure Cosmos DB databases with either a

 ## Conflict resolution

-Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.kth.se/social/upload/51647982f276546170461c46/4-gossip.pdf) and the [Bayou](https://people.cs.umass.edu/~mcorner/courses/691M/papers/terry.pdf) system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating the Azure Cosmos DB’s system design, they have also undergone significant transformation as we applied them to the Azure Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Azure Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Azure Cosmos DB delivers to its customers.
+Our design for the update propagation, conflict resolution, and causality tracking is inspired from the prior work on [epidemic algorithms](https://www.kth.se/social/upload/51647982f276546170461c46/4-gossip.pdf) and the Bayou system. While the kernels of the ideas have survived and provide a convenient frame of reference for communicating the Azure Cosmos DB’s system design, they have also undergone significant transformation as we applied them to the Azure Cosmos DB system. This was needed, because the previous systems were designed neither with the resource governance nor with the scale at which Azure Cosmos DB needs to operate, nor to provide the capabilities (for example, bounded staleness consistency) and the stringent and comprehensive SLAs that Azure Cosmos DB delivers to its customers.

 Recall that a partition-set is distributed across multiple regions and follows Azure Cosmos DB’s (multi-region writes) replication protocol to replicate the data among the physical partitions comprising a given partition-set. Each physical partition (of a partition-set) accepts writes and serves reads typically to the clients that are local to that region. Writes accepted by a physical partition within a region are durably committed and made highly available within the physical partition before they are acknowledged to the client. These are tentative writes and are propagated to other physical partitions within the partition-set using an anti-entropy channel. Clients can request either tentative or committed writes by passing a request header. The anti-entropy propagation (including the frequency of propagation) is dynamic, based on the topology of the partition-set, regional proximity of the physical partitions, and the consistency level configured. Within a partition-set, Azure Cosmos DB follows a primary commit scheme with a dynamically selected arbiter partition. The arbiter selection is dynamic and is an integral part of the reconfiguration of the partition-set based on the topology of the overlay. The committed writes (including multi-row/batched updates) are guaranteed to be ordered.
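The anti-entropy flow in the paragraph above is dense. As a toy illustration only (emphatically not Cosmos DB's implementation), a minimal sketch of tentative writes converging across replicas of a partition-set through pairwise anti-entropy exchange:

```python
import itertools

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.tentative = {}  # write_id -> value, accepted but not yet committed

    def local_write(self, write_id: int, value: str):
        # Durably accept the write locally before acknowledging the client.
        self.tentative[write_id] = value

    def anti_entropy(self, other: "Replica"):
        # Exchange whatever the peer is missing, in both directions.
        for wid, val in self.tentative.items():
            other.tentative.setdefault(wid, val)
        for wid, val in other.tentative.items():
            self.tentative.setdefault(wid, val)

replicas = [Replica("west-us"), Replica("east-us"), Replica("europe")]
replicas[0].local_write(1, "a")   # accepted in west-us
replicas[2].local_write(2, "b")   # accepted in europe

# One round of pairwise anti-entropy converges this tiny example.
for r1, r2 in itertools.combinations(replicas, 2):
    r1.anti_entropy(r2)

assert all(r.tentative == {1: "a", 2: "b"} for r in replicas)
```

The real protocol layers commit semantics on top of this propagation: a dynamically selected arbiter partition drives the primary commit scheme, which this sketch does not model.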

articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md

Lines changed: 1 addition & 1 deletion
@@ -317,7 +317,7 @@ To view and reuse some samples of standard custom setups, complete the following

 * A *POSTGRESQL ODBC* folder, which contains a custom setup script (*main.cmd*) to install the PostgreSQL ODBC drivers on each node of your Azure-SSIS IR. This setup lets you use the ODBC Connection Manager, Source, and Destination to connect to the PostgreSQL server.

-  First, [download the latest 64-bit and 32-bit versions of PostgreSQL ODBC driver installers](https://www.postgresql.org/ftp/odbc/versions/msi/) (for example, *psqlodbc_x64.msi* and *psqlodbc_x86.msi*), and then upload them all together with *main.cmd* to your blob container.
+  First, [download the latest 64-bit and 32-bit versions of PostgreSQL ODBC driver installers](/sql/connect/odbc/download-odbc-driver-for-sql-serve) (for example, *psqlodbc_x64.msi* and *psqlodbc_x86.msi*), and then upload them all together with *main.cmd* to your blob container.

 * A *SAP BW* folder, which contains a custom setup script (*main.cmd*) to install the SAP .NET connector assembly (*librfc32.dll*) on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the SAP BW Connection Manager, Source, and Destination to connect to the SAP BW server.

articles/iot-dps/iot-dps-https-sym-key-support.md

Lines changed: 1 addition & 1 deletion
@@ -399,7 +399,7 @@ To learn more about creating SAS tokens for IoT Hub, including example code in o

 ### Send data to your IoT hub

-You call the IoT Hub [Send Device Event](/rest/api/iothub/device/send-device-event) REST API to send telemetry to the device.
+You call the IoT Hub Send Device Event REST API to send telemetry to the device.

 Use the following curl command:
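The curl command itself is elided from this hunk. For orientation, a rough Python sketch of the same Send Device Event REST call, assuming a device-scoped SAS token is already in hand; the API version and payload are illustrative assumptions:

```python
import requests

# Values from the earlier registration steps (placeholders).
iot_hub = "my-hub.azure-devices.net"
device_id = "my-device"
sas_token = "SharedAccessSignature sr=..."  # device-scoped SAS token

# Send Device Event: POST telemetry to the device's events endpoint.
url = f"https://{iot_hub}/devices/{device_id}/messages/events"
resp = requests.post(
    url,
    params={"api-version": "2020-03-13"},  # illustrative version
    headers={"Authorization": sas_token, "Content-Type": "application/json"},
    json={"temperature": 21.5},
)
resp.raise_for_status()  # a successful send returns 204 No Content
```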

articles/iot-dps/iot-dps-https-x509-support.md

Lines changed: 1 addition & 1 deletion
@@ -386,7 +386,7 @@ Note down the device ID and the assigned IoT hub. You'll use them to send a tele

 ## Send a telemetry message

-You call the IoT Hub [Send Device Event](/rest/api/iothub/device/send-device-event) REST API to send telemetry to the device.
+You call the IoT Hub Send Device Event REST API to send telemetry to the device.

 Use the following curl command:
