
Commit 02909f7

Merge pull request #108061 from craigcaseyMSFT/vcraic0315

fix broken links from CATS report

2 parents 979aeb5 + 5428e8c

10 files changed: +36, -36 lines changed


articles/active-directory/fundamentals/whats-new.md

Lines changed: 6 additions & 6 deletions
Large diffs are not rendered by default.

articles/active-directory/saas-apps/rightanswers-tutorial.md

Lines changed: 16 additions & 16 deletions

@@ -51,19 +51,19 @@ To configure the integration of RightAnswers into Azure AD, you need to add Righ

1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.

-   ![The Azure Active Directory button](common/select-azuread.png)
+   ![The Azure Active Directory button](common/select-azuread.png)

2. Navigate to **Enterprise Applications** and then select the **All Applications** option.

-   ![The Enterprise applications blade](common/enterprise-applications.png)
+   ![The Enterprise applications blade](common/enterprise-applications.png)

3. To add new application, click **New application** button on the top of dialog.

-   ![The New application button](common/add-new-app.png)
+   ![The New application button](common/add-new-app.png)

4. In the search box, type **RightAnswers**, select **RightAnswers** from result panel then click **Add** button to add the application.

-   ![RightAnswers in the results list](common/search-new-app.png)
+   ![RightAnswers in the results list](common/search-new-app.png)

## Configure and test Azure AD single sign-on

@@ -95,38 +95,38 @@ To configure Azure AD single sign-on with RightAnswers, perform the following st

3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.

-   ![Edit Basic SAML Configuration](common/edit-urls.png)
+   ![Edit Basic SAML Configuration](common/edit-urls.png)

4. On the **Basic SAML Configuration** section, perform the following steps:

    ![RightAnswers Domain and URLs single sign-on information](common/sp-identifier.png)

-   a. In the **Sign on URL** text box, type a URL using the following pattern:
+   a. In the **Sign on URL** text box, type a URL using the following pattern:
    `https://<subdomain>.rightanswers.com/portal/ss/`

    b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
    `https://<subdomain>.rightanswers.com:<identifier>/portal`

-   > [!NOTE]
-   > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [RightAnswers Client support team](https://support.rightanswers.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+   > [!NOTE]
+   > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [RightAnswers Client support team](https://uplandsoftware.com/rightanswers/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.

5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.

-   ![The Certificate download link](common/metadataxml.png)
+   ![The Certificate download link](common/metadataxml.png)

6. On the **Set up RightAnswers** section, copy the appropriate URL(s) as per your requirement.

-   ![Copy configuration URLs](common/copy-configuration-urls.png)
+   ![Copy configuration URLs](common/copy-configuration-urls.png)

-   a. Login URL
+   a. Login URL

-   b. Azure AD Identifier
+   b. Azure AD Identifier

-   c. Logout URL
+   c. Logout URL

### Configure RightAnswers Single Sign-On

-To configure single sign-on on **RightAnswers** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [RightAnswers support team](https://support.rightanswers.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **RightAnswers** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [RightAnswers support team](https://uplandsoftware.com/rightanswers/contact/). They set this setting to have the SAML SSO connection set properly on both sides.

> [!NOTE]
> Your RightAnswers support team has to do the actual SSO configuration. You will get a notification when SSO has been enabled for your subscription.

@@ -162,11 +162,11 @@ In this section, you enable Britta Simon to use Azure single sign-on by granting

1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **RightAnswers**.

-   ![Enterprise applications blade](common/enterprise-applications.png)
+   ![Enterprise applications blade](common/enterprise-applications.png)

2. In the applications list, select **RightAnswers**.

-   ![The RightAnswers link in the Applications list](common/all-applications.png)
+   ![The RightAnswers link in the Applications list](common/all-applications.png)

3. In the menu on the left, select **Users and groups**.
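The Sign on URL and Identifier patterns from the hunk above can be sketched as follows. This is an illustration only: the subdomain and identifier values are hypothetical placeholders, not real tenant values.

```python
# Illustration of the RightAnswers SAML URL patterns from the tutorial diff above.
# "contoso" and "8443" are hypothetical placeholder values; use the real
# subdomain and identifier supplied by the RightAnswers support team.
subdomain = "contoso"
identifier = "8443"

sign_on_url = f"https://{subdomain}.rightanswers.com/portal/ss/"
entity_id = f"https://{subdomain}.rightanswers.com:{identifier}/portal"

print(sign_on_url)  # https://contoso.rightanswers.com/portal/ss/
print(entity_id)    # https://contoso.rightanswers.com:8443/portal
```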

articles/cosmos-db/spark-connector.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ You can use the connector with [Azure Databricks](https://azure.microsoft.com/se

## Quickstart

* Follow the steps at [Get started with the Java SDK](sql-api-async-java-get-started.md) to set up a Cosmos DB account, and populate some data.
-* Follow the steps at [Azure Databricks getting started](https://docs.azuredatabricks.net/getting-started/index.html) to set up an Azure Databricks workspace and cluster.
+* Follow the steps at [Azure Databricks getting started](/azure/azure-databricks/quickstart-create-databricks-workspace-portal) to set up an Azure Databricks workspace and cluster.
* You can now create new Notebooks, and import the Cosmos DB connector library. Jump to [Working with the Cosmos DB connector](#bk_working_with_connector) for details on how to set up your workspace.
* The following section has snippets on how to read and write using the connector.
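As a sketch of what those read/write snippets look like, the connector is typically driven by an options map. The option names below follow the azure-cosmosdb-spark connector's configuration style; the account endpoint, key, and database/collection names are placeholders, and the Spark read/write calls are shown as comments since they require a live Databricks cluster with the connector library attached.

```python
# Hypothetical configuration for the azure-cosmosdb-spark connector.
# Endpoint, Masterkey, Database, and Collection values are placeholders.
read_config = {
    "Endpoint": "https://<account>.documents.azure.com:443/",
    "Masterkey": "<primary-key>",
    "Database": "mydatabase",
    "Collection": "mycollection",
    "query_custom": "SELECT c.id FROM c",  # optional server-side query
}

# On a cluster with the connector library attached, reading and writing
# would look roughly like this (not executed here):
# df = (spark.read
#       .format("com.microsoft.azure.cosmosdb.spark")
#       .options(**read_config)
#       .load())
# df.write.format("com.microsoft.azure.cosmosdb.spark").options(**read_config).save()

print(sorted(read_config))
```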

articles/expressroute/expressroute-locations.md

Lines changed: 2 additions & 2 deletions

@@ -29,7 +29,7 @@ The tables in this article provide information on ExpressRoute geographical cove

Azure regions are global datacenters where Azure compute, networking and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.

## ExpressRoute locations
-ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise Edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsofts network – and are globally distributed, providing customers the opportunity to connect to Microsofts network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsofts network. In general, the ExpressRoute location does not need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
+ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise Edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network – and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location does not need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.

You will have access to Azure services across all regions within a geopolitical region if you connected to at least one ExpressRoute location within the geopolitical region.

@@ -236,7 +236,7 @@ If you are remote and don't have fiber connectivity or you want to explore other

| **[Altice Business](https://golightpath.com/transport)** |Equinix |New York, Washington DC |
| **[Arteria Networks Corporation](https://www.arteria-net.com/business/service/cloud/sca/)** |Equinix |Tokyo |
| **[Axtel](https://alestra.mx/landing/expressrouteazure/)** |Equinix |Dallas|
-| **[Beanfield Metroconnect](https://www.beanfield.com/cloud-exchange/)** |Megaport |Toronto|
+| **[Beanfield Metroconnect](https://www.beanfield.com/business/cloud-connect)** |Megaport |Toronto|
| **[Bezeq International Ltd.](https://www.bezeqint.net/english)** | euNetworks | London |
| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/)** | Equinix | Amsterdam, Frankfurt, London, Singapore, Washington DC |
| **[BroadBand Tower, Inc.](https://www.bbtower.co.jp/product-service/data-center/network/dcconnect-for-azure/)** | Equinix | Tokyo |

articles/machine-learning/team-data-science-process/execute-data-science-tasks.md

Lines changed: 4 additions & 4 deletions

@@ -57,9 +57,9 @@ A YAML file is used to specify:

- what portion of the data is used for training and what portion for testing
- which algorithms to run
- the choice of control parameters for model optimization:
-  - cross-validation
-  - bootstrapping
-  - folds of cross-validation
+  - cross-validation
+  - bootstrapping
+  - folds of cross-validation
- the hyper-parameter sets for each algorithm.

The number of algorithms, the number of folds for optimization, the hyper-parameters, and the number of hyper-parameter sets to sweep over can also be modified in the Yaml file to run the models quickly. For example, they can be run with a lower number of CV folds, a lower number of parameter sets. If it is warranted, they can also be run more comprehensively with a higher number of CV folds or a larger number of parameter sets.

@@ -70,7 +70,7 @@ For more information, see [Automated Modeling and Reporting Utility in TDSP Data

After multiple models have been built, you usually need to have a system for registering and managing the models. Typically you need a combination of scripts or APIs and a backend database or versioning system. A few options that you can consider for these management tasks are:

1. [Azure Machine Learning - model management service](../index.yml)
-2. [ModelDB from MIT](https://mitdbg.github.io/modeldb/)
+2. [ModelDB from MIT](http://modeldb.csail.mit.edu:3000/projects)
3. [SQL-server as a model management system](https://blogs.technet.microsoft.com/dataplatforminsider/2016/10/17/sql-server-as-a-machine-learning-model-management-system/)
4. [Microsoft Machine Learning Server](https://docs.microsoft.com/sql/advanced-analytics/r/r-server-standalone)
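As a rough sketch of what such a YAML file might contain, consider the fragment below. The key names here are hypothetical, invented for illustration only; the actual schema is defined by the TDSP automated modeling and reporting utility referenced in the surrounding text.

```yaml
# Hypothetical key names - illustration only; consult the TDSP automated
# modeling and reporting utility for the real schema.
data_split:
  train_fraction: 0.75
  test_fraction: 0.25
algorithms:
  - random_forest
  - elastic_net
model_optimization:
  cross_validation: true
  bootstrapping: false
  cv_folds: 5
hyperparameters:
  random_forest:
    num_trees: [100, 250, 500]
    max_depth: [5, 10]
```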

articles/openshift/tutorial-create-cluster.md

Lines changed: 1 addition & 1 deletion

@@ -220,7 +220,7 @@ In the OpenShift console, click the question mark in the upper right corner by y

>
> Alternately, you can [download the oc CLI](https://www.okd.io/download.html) directly.

-The **Command Line Tools** page provides a command of the form `oc login https://<your cluster name>.<azure region>.cloudapp.azure.com --token=<token value>`. Click the *Copy to clipboard* button to copy this command. In a terminal window, [set your path](https://docs.okd.io/latest/cli_reference/get_started_cli.html#installing-the-cli) to include your local installation of the oc tools. Then sign in to the cluster using the oc CLI command you copied.
+The **Command Line Tools** page provides a command of the form `oc login https://<your cluster name>.<azure region>.cloudapp.azure.com --token=<token value>`. Click the *Copy to clipboard* button to copy this command. In a terminal window, [set your path](https://docs.okd.io/latest/cli_reference/openshift_cli/getting-started-cli.html#installing-the-cli) to include your local installation of the oc tools. Then sign in to the cluster using the oc CLI command you copied.

If you couldn't get the token value using the steps above, get the token value from: `https://<your cluster name>.<azure region>.cloudapp.azure.com/oauth/token/request`.
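The shape of that login command can be illustrated as below. The cluster name, region, and token are hypothetical placeholders; in practice you paste the exact command copied from the **Command Line Tools** page.

```python
# Assemble the oc login command described above from its parts.
# "mycluster", "eastus", and "abc123" are hypothetical placeholder values.
cluster_name = "mycluster"
azure_region = "eastus"
token = "abc123"

login_cmd = (
    f"oc login https://{cluster_name}.{azure_region}.cloudapp.azure.com "
    f"--token={token}"
)
print(login_cmd)  # oc login https://mycluster.eastus.cloudapp.azure.com --token=abc123
```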

articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md

Lines changed: 2 additions & 2 deletions

@@ -95,8 +95,8 @@ This table compares the capabilities of Gen1 to that of Gen2.

|Authentication|[AAD managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)|[AAD managed identity](../../active-directory/managed-identities-azure-resources/overview.md)<br>[Service principals](../../active-directory/develop/app-objects-and-service-principals.md)<br>[Shared Access Key](https://docs.microsoft.com/rest/api/storageservices/authorize-with-shared-key)|
|Authorization|Management - [RBAC](../../role-based-access-control/overview.md)<br>Data – [ACLs](data-lake-storage-access-control.md)|Management – [RBAC](../../role-based-access-control/overview.md)<br>Data - [ACLs](data-lake-storage-access-control.md), [RBAC](../../role-based-access-control/overview.md) |
|Encryption – Data at rest|Server side – with [Microsoft-managed](../common/storage-service-encryption.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [customer-managed](../common/encryption-customer-managed-keys.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) keys|Server side – with [Microsoft-managed](../common/storage-service-encryption.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or [customer-managed](../common/encryption-customer-managed-keys.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) keys|
-|VNET Support|[VNET Integration](../../data-lake-store/data-lake-store-network-security.md)|[Service Endpoints](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Private Endpoints](../common/storage-private-endpoints.md)|
-|Developer experience|[REST](../../data-lake-store/data-lake-store-data-operations-rest-api.md), [.NET](../../data-lake-store/data-lake-store-data-operations-net-sdk.md), [Java](../../data-lake-store/data-lake-store-get-started-java-sdk.md), [Python](../../data-lake-store/data-lake-store-data-operations-python.md), [PowerShell](../../data-lake-store/data-lake-store-get-started-powershell.md), [Azure CLI](../../data-lake-store/data-lake-store-get-started-cli-2.0.md)|[REST](/rest/api/storageservices/data-lake-storage-gen2), [.NET](/data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md), [JavaScript](data-lake-storage-directory-file-acl-javascript.md), [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md) (In public preview)|
+|VNET Support|[VNET Integration](../../data-lake-store/data-lake-store-network-security.md)|[Service Endpoints](../common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [Private Endpoints (public preview)](../common/storage-private-endpoints.md)|
+|Developer experience|[REST](../../data-lake-store/data-lake-store-data-operations-rest-api.md), [.NET](../../data-lake-store/data-lake-store-data-operations-net-sdk.md), [Java](../../data-lake-store/data-lake-store-get-started-java-sdk.md), [Python](../../data-lake-store/data-lake-store-data-operations-python.md), [PowerShell](../../data-lake-store/data-lake-store-get-started-powershell.md), [Azure CLI](../../data-lake-store/data-lake-store-get-started-cli-2.0.md)|[REST](/rest/api/storageservices/data-lake-storage-gen2), [.NET](data-lake-storage-directory-file-acl-dotnet.md), [Java](data-lake-storage-directory-file-acl-java.md), [Python](data-lake-storage-directory-file-acl-python.md), [JavaScript](data-lake-storage-directory-file-acl-javascript.md), [PowerShell](data-lake-storage-directory-file-acl-powershell.md), [Azure CLI](data-lake-storage-directory-file-acl-cli.md) (In public preview)|
|Diagnostic logs|Classic logs<br>[Azure Monitor integrated](../../data-lake-store/data-lake-store-diagnostic-logs.md)|[Classic logs](../common/storage-analytics-logging.md) (In public preview)<br>Azure monitor integration – timeline TBD|
|Ecosystem|[HDInsight (3.6)](../../data-lake-store/data-lake-store-hdinsight-hadoop-use-portal.md), [Azure Databricks (3.1 and above)](https://docs.databricks.com/data/data-sources/azure/azure-datalake.html), [SQL DW](https://docs.microsoft.com/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store), [ADF](../../data-factory/load-azure-data-lake-store.md)|[HDInsight (3.6, 4.0)](../../hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md), [Azure Databricks (5.1 and above)](https://docs.microsoft.com/azure/databricks/data/data-sources/azure/azure-datalake-gen2), [SQL DW](../../sql-database/sql-database-vnet-service-endpoint-rule-overview.md), [ADF](../../data-factory/load-azure-data-lake-storage-gen2.md)|

articles/storage/common/storage-network-security.md

Lines changed: 1 addition & 1 deletion

@@ -388,7 +388,7 @@ The **Allow trusted Microsoft services...** setting also allows a particular ins

| Azure Container Registry Tasks | Microsoft.ContainerRegistry/registries | ACR Tasks can access storage accounts when building container images. |
| Azure Data Factory | Microsoft.DataFactory/factories | Allows access to storage accounts through the ADF runtime. |
| Azure Data Share | Microsoft.DataShare/accounts | Allows access to storage accounts through Data Share. |
-| Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. [Learn more](/azure/logic-apps/create-managed-service-identity.md#authenticate-access-with-managed-identity). |
+| Azure Logic Apps | Microsoft.Logic/workflows | Enables logic apps to access storage accounts. [Learn more](/azure/logic-apps/create-managed-service-identity#authenticate-access-with-managed-identity). |
| Azure Machine Learning Service | Microsoft.MachineLearningServices | Authorized Azure Machine Learning workspaces write experiment output, models, and logs to Blob storage and read the data. [Learn more](/azure/machine-learning/service/how-to-enable-virtual-network#use-a-storage-account-for-your-workspace). |
| Azure SQL Data Warehouse | Microsoft.Sql | Allows import and export of data from specific SQL Database instances using PolyBase. [Learn more](/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview). |
| Azure Stream Analytics | Microsoft.StreamAnalytics | Allows data from a streaming job to be written to Blob storage. This feature is currently in preview. [Learn more](/azure/stream-analytics/blob-output-managed-identity). |

articles/virtual-machines/h-series.md

Lines changed: 1 addition & 1 deletion

@@ -80,7 +80,7 @@ The Azure Marketplace has many Linux distributions that support RDMA connectivit

[!INCLUDE [virtual-machines-common-ubuntu-rdma](../../includes/virtual-machines-common-ubuntu-rdma.md)]

-For more details on enabling InfiniBand, setting up MPI, see [Enable InfiniBand](/workloads/hpc/enable-infiniband.md).
+For more details on enabling InfiniBand, setting up MPI, see [Enable InfiniBand](./workloads/hpc/enable-infiniband.md).

## Other sizes

articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md

Lines changed: 2 additions & 2 deletions

@@ -470,9 +470,9 @@ The following items are prefixed with either **[A]** - applicable to all nodes,

When using drbd to synchronize data from one host to another, a so called split brain can occur. A split brain is a scenario where both cluster nodes promoted the drbd device to be the primary and went out of sync. It might be a rare situation but you still want to handle and resolve a split brain as fast as possible. It is therefore important to be notified when a split brain happened.

-Read [the official drbd documentation](https://docs.linbit.com/doc/users-guide-83/s-configure-split-brain-behavior/#s-split-brain-notification) on how to set up a split brain notification.
+Read [the official drbd documentation](https://www.linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-split-brain-notification) on how to set up a split brain notification.

-It is also possible to automatically recover from a split brain scenario. For more information, read [Automatic split brain recovery policies](https://docs.linbit.com/doc/users-guide-83/s-configure-split-brain-behavior/#s-automatic-split-brain-recovery-configuration)
+It is also possible to automatically recover from a split brain scenario. For more information, read [Automatic split brain recovery policies](https://www.linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-automatic-split-brain-recovery-configuration)

### Configure Cluster Framework
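As a sketch of how the two linked settings fit together, a drbd 8.4 resource section can combine a split-brain notification handler with automatic recovery policies roughly as follows. The resource name and the chosen recovery policies are examples, not recommendations; see the linked drbd documentation for the full option list.

```
resource r0 {
  handlers {
    # Notify root by mail when drbd detects a split brain
    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
  }
  net {
    # Example automatic split brain recovery policies
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
}
```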
