
Commit 68f5a19

Merge pull request #95866 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)

2 parents: 4239066 + fea3840

3 files changed: +6 -1 lines changed

articles/api-management/api-management-howto-provision-self-hosted-gateway.md

Lines changed: 2 additions & 0 deletions
@@ -36,13 +36,15 @@ Complete the following quickstart: [Create an Azure API Management instance](get
 3. Enter the **Name** and **Region** of the gateway.
    > [!TIP]
    > **Region** specifies the intended location of the gateway nodes that will be associated with this gateway resource. It's semantically equivalent to a similar property associated with any Azure resource, but it can be assigned an arbitrary string value.
+
 4. Optionally, enter a **Description** of the gateway resource.
 5. Optionally, select **+** under **APIs** to associate one or more APIs with this gateway resource.
    > [!TIP]
    > You can associate and remove an API from a gateway on the API's **Settings** tab.

    > [!IMPORTANT]
    > By default, none of the existing APIs are associated with the new gateway resource. Attempts to invoke them via the new gateway will therefore result in `404 Resource Not Found` responses.
+
 6. Click **Add**.

 Now the gateway resource has been provisioned in your API Management instance. You can proceed to deploy the gateway.
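
For readers who prefer scripting the portal steps above, here is a minimal sketch of the same provisioning call against the ARM REST API with `az rest`. The resource names, region string, and `api-version` below are illustrative assumptions, not values taken from this commit:

```bash
# Create (or update) a self-hosted gateway resource in an API Management instance.
# Placeholders: <subscription-id>, <resource-group>, <apim-instance>.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>/gateways/my-gateway?api-version=2021-08-01" \
  --body '{
    "properties": {
      "locationData": { "name": "on-premises-datacenter" },
      "description": "Self-hosted gateway for on-premises APIs"
    }
  }'
```

Here `locationData.name` corresponds to the **Region** field in the portal, which, as the TIP notes, accepts an arbitrary string rather than a real Azure region.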

articles/hdinsight/hdinsight-high-availability-components.md

Lines changed: 1 addition & 1 deletion
@@ -117,7 +117,7 @@ The second Zookeeper quorum is independent of the first quorum, so the active Na

 HDInsight clusters based on Apache Hadoop 2.4 or higher support YARN ResourceManager high availability. There are two ResourceManagers, rm1 and rm2, running on headnode 0 and headnode 1, respectively. Like the NameNode, the YARN ResourceManager is also configured for automatic failover. Another ResourceManager is automatically elected to be active when the current active ResourceManager goes down or becomes unresponsive.

-YARN ResourceManager uses its embedded *ActiveStandbyElector* as a failure detector and leader elector. Unlike HDFS NodeManager, YARN ResourceManager doesn't need a separate ZKFC daemon. The active ResourceManager writes its states into Apache Zookeeper.
+YARN ResourceManager uses its embedded *ActiveStandbyElector* as a failure detector and leader elector. Unlike the HDFS NameNode, the YARN ResourceManager doesn't need a separate ZKFC daemon. The active ResourceManager writes its state into Apache Zookeeper.

 The high availability of the YARN ResourceManager is independent of the NameNode and other HDInsight HA services. The active ResourceManager may not run on the active headnode, or on the headnode where the active NameNode is running. For more information about YARN ResourceManager high availability, see [ResourceManager High Availability](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html).
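
As a quick sanity check of the behavior described above, you can query each ResourceManager's HA state directly from a cluster headnode. A minimal sketch using the standard Hadoop CLI; rm1 and rm2 are the ResourceManager IDs named in the article:

```bash
# Query each ResourceManager's HA state; exactly one should report "active".
# rm1 and rm2 come from yarn.resourcemanager.ha.rm-ids in yarn-site.xml
# and require yarn.resourcemanager.ha.enabled=true.
yarn rmadmin -getServiceState rm1   # e.g. prints: active
yarn rmadmin -getServiceState rm2   # e.g. prints: standby
```

Note that there is no separate ZKFC process to inspect here: failover is handled by the ActiveStandbyElector embedded in each ResourceManager, which is exactly the distinction from the HDFS NameNode that this commit corrects.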

articles/storage/blobs/storage-lifecycle-management-concepts.md

Lines changed: 3 additions & 0 deletions
@@ -428,6 +428,9 @@ For data that is modified and accessed regularly throughout its lifetime, snapsh
 **I created a new policy, why do the actions not run immediately?**
 The platform runs the lifecycle policy once a day. Once you configure a policy, it can take up to 24 hours for some actions to run for the first time.

+**If I update an existing policy, how long does it take for the actions to run?**
+The updated policy takes up to 24 hours to go into effect. Once the policy is in effect, it can take up to 24 hours for the actions to run. The policy may therefore take up to 48 hours to execute.
+
 **I manually rehydrated an archived blob, how do I prevent it from being moved back to the Archive tier temporarily?**
 When a blob is moved from one access tier to another, its last modification time doesn't change. If you manually rehydrate an archived blob to the hot tier, the lifecycle management engine will move it back to the archive tier. Temporarily disable the rule that affects this blob to prevent it from being archived again, and re-enable the rule when the blob can safely be moved back to the archive tier. You can also copy the blob to another location if it needs to stay in the hot or cool tier permanently.
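
To make the "disable the rule" suggestion concrete, here is a minimal sketch of a lifecycle policy with a single archiving rule paused via its `enabled` flag, applied with the Azure CLI. The account, rule name, prefix, and day threshold are illustrative assumptions:

```bash
# policy.json: one rule that archives block blobs under "logs/" 90 days
# after last modification. Setting "enabled": false pauses the rule so a
# manually rehydrated blob isn't moved back to the archive tier.
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "name": "archive-logs",
      "enabled": false,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account (placeholders: mystorageaccount, myresourcegroup).
az storage account management-policy create \
  --account-name mystorageaccount \
  --resource-group myresourcegroup \
  --policy @policy.json
```

Re-applying the same file with `"enabled": true` reactivates the rule; per the FAQ answers above, allow up to 24-48 hours for the change to take effect and the actions to run.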
