**`articles/operator-service-manager/best-practices-onboard-deploy.md`**
As the first step towards cleaning up a deployed environment, start by deleting the following operator resources, in order:

- Site
- CGV

Only once these operator resources are successfully deleted should a user proceed to delete other environment resources, such as the NAKS cluster.
> [!IMPORTANT]
> Deleting resources out of order can result in orphaned resources left behind.
## NfApp Sequential Ordering Behavior
### Overview
By default, containerized network function applications (NfApps) are installed or updated in the sequential order in which they appear in the network function design version (NFDV). On delete, the NfApps are deleted in the reverse of the order specified. Where a publisher needs to define a specific ordering of NfApps, different from the default, a dependsOnProfile is used to define a unique sequence for install, update, and delete operations.
### How to use dependsOnProfile
A publisher can use the dependsOnProfile in the NFDV to control the sequence of helm executions for NfApps. Given the following example, the NfApps are processed in this order: on install, dummyApplication1, dummyApplication2, then dummyApplication; on update, dummyApplication2, dummyApplication1, then dummyApplication; and on delete, dummyApplication2, dummyApplication1, then dummyApplication.
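The example itself is elided from this excerpt. As a hedged sketch of what such an NFDV fragment might look like (the property names `installDependsOn`, `updateDependsOn`, and `uninstallDependsOn` are assumptions inferred from the install, update, and delete sequences described):

```json
{
  "networkFunctionApplications": [
    {
      "name": "dummyApplication",
      "dependsOnProfile": {
        "installDependsOn": ["dummyApplication1", "dummyApplication2"],
        "updateDependsOn": ["dummyApplication2", "dummyApplication1"],
        "uninstallDependsOn": ["dummyApplication2", "dummyApplication1"]
      }
    },
    { "name": "dummyApplication1" },
    { "name": "dummyApplication2" }
  ]
}
```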
## injectArtifactStoreDetails considerations
In some cases, third-party helm charts may not be fully compliant with AOSM requirements for registryURL. In these cases, the injectArtifactStoreDetails feature can be used to avoid making changes to helm packages.
### How to enable
To use injectArtifactStoreDetails, set the installOptions parameter in the NF resource roleOverrides section to true, then use whatever registryURL value is needed to keep the registry URL valid. See the following example of the injectArtifactStoreDetails parameter enabled.
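The referenced example is elided from this excerpt. As a rough, hedged sketch only (the exact roleOverrides payload shape is an assumption, and `nfApp1` is a hypothetical NfApp name), enabling the flag for both install and upgrade operations might look like:

```json
"roleOverrideValues": [
  "{\"name\":\"nfApp1\",\"deployParametersMappingRuleProfile\":{\"helmMappingRuleProfile\":{\"options\":{\"installOptions\":{\"injectArtifactStoreDetails\":\"true\"},\"upgradeOptions\":{\"injectArtifactStoreDetails\":\"true\"}}}}}"
]
```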
**`articles/operator-service-manager/quickstart-publish-containerized-network-function-definition.md`**
Execute the following command to publish the Network Function Definition (NFD) and its associated artifacts.

>
>If you get an error saying "**A private publisher resource with the name 'nginx-publisher' already exists in the provided region**", edit the `publisher_name` field in the config file so that it is unique (e.g. add a random string suffix), re-run the `build` command (above), and then re-run this `publish` command.
>
>If you go on to create a network service design, you will need to use this new publisher name in the `resource_element_templates` array.
```azurecli
az aosm nfd publish -b cnf-cli-output --definition-type cnf
```
**`articles/operator-service-manager/quickstart-publish-virtualized-network-function-definition.md`**
Execute the following command to publish the Network Function Definition (NFD) and its associated artifacts.

>
>If you get an error saying "**A private publisher resource with the name 'ubuntu-publisher' already exists in the provided region**", edit the `publisher_name` field in the config file so that it is unique (e.g. add a random string suffix), re-run the `build` command (above), and then re-run this `publish` command.
>
>If you go on to create a network service design, you will need to use this new publisher name in the `resource_element_templates` array.
```azurecli
az aosm nfd publish --build-output-folder vnf-cli-output --definition-type vnf
```
This guide describes the Azure Operator Service Manager (AOSM) upgrade failure behavior features for container network functions (CNFs). As part of the AOSM safe upgrade practices initiative, these features offer a choice between faster retries, with pause on failure, and a return to the starting point, with rollback on failure.
## Pause on failure
Any upgrade using AOSM starts with a site network service (SNS) reput operation. The reput operation processes the network function applications (NfApps) found in the network function design version (NFDV), and implements the following default logic:
* NfApps are processed following either updateDependsOn ordering, or in the sequential order they appear.
* NfApps with parameter "applicationEnabled" set to disable are skipped.
* NfApps present, but not referenced by the new NFDV, are deleted.
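The default sequencing above can be sketched as a small ordering function. This is a simplified illustration only, not AOSM's implementation; the dictionary keys mirror the parameter names mentioned in the bullets:

```python
def plan_reput(nfapps, operation):
    """Return the order in which NfApps would be processed.

    nfapps: list of dicts in NFDV order, e.g.
        {"name": "app1", "applicationEnabled": "enable"}
    operation: "install", "update", or "delete".
    """
    # NfApps with applicationEnabled set to disable are skipped.
    active = [a["name"] for a in nfapps
              if a.get("applicationEnabled", "enable") != "disable"]
    # Install/update follow the NFDV order; delete runs in reverse.
    return list(reversed(active)) if operation == "delete" else active
```

For example, with three NfApps where the second is disabled, install processes the first then the third, and delete processes them in reverse.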
**`articles/oracle/oracle-db/oracle-database-delegated-subnet-limits.md`**
ms.date: 08/01/2024
In this article, you learn about delegated subnet limits for Oracle Database@Azure.
Oracle Database@Azure infrastructure resources are connected to your Azure virtual network using a virtual NIC from your [delegated subnets](/azure/virtual-network/subnet-delegation-overview) (delegated to `Oracle.Database/networkAttachment`). By default, the Oracle Database@Azure service can use up to five delegated subnets. If you need more delegated subnet capacity, you can request a service limit increase.
**`articles/storage/common/storage-account-overview.md`**
When naming your storage account, keep these rules in mind:
- Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
- Your storage account name must be unique within Azure. No two storage accounts can have the same name.
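The first two rules can be checked locally before creating the account (a quick sketch; global uniqueness can only be confirmed by the service itself):

```python
import re

# 3-24 characters, lowercase letters and digits only.
_NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Check the length and character rules for a storage account name."""
    return _NAME_RE.fullmatch(name) is not None
```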
## Storage account workloads
Azure Storage customers use a variety of workloads to store data, access it and derive insights to meet their business objectives. Each workload uses specific protocols for data operations based on its requirements as well as industry standards.
Below is a high-level categorization of different primary workloads for your storage accounts.
### Cloud native
Cloud native apps are large-scale distributed applications that are built on a foundation of cloud paradigms and technologies. This modern approach focuses on cloud scale and performance capabilities. Cloud native apps can be based on microservices architecture, use managed services, and employ continuous delivery to achieve reliability. These applications are typically categorized into web apps, mobile apps, containerized apps, and serverless/FaaS.
### Analytics
Analytics is the systematic, computational analysis of data and statistics. It involves discovering, interpreting, and communicating meaningful insights and patterns found in data. The data discovered can be manipulated and interpreted in ways that further a business's objectives and help it meet its goals. These workloads typically consist of a pipeline that ingests large volumes of data, which are prepped, curated, and aggregated for downstream consumption via Power BI, data warehouses, or applications. Analytics workloads can require high ingress and egress, driving higher throughput on your storage account. Some different types of analytics include (but are not limited to) real-time analytics, advanced analytics, predictive analytics, emotional analytics, and sentiment analysis. For analytics, we guarantee that our customers have high-throughput access to large amounts of data in distributed storage architectures.
### High-performance computing (HPC)
High-performance computing is the aggregation of multiple computing nodes acting on the same set of tasks to achieve more than that of a single node in a given time frame. It involves using powerful processors that work in parallel to process massive, multi-dimensional data sets. HPC workloads require very high throughput read and write operations for workloads like gene sequencing and reservoir simulation. HPC workloads also include applications with high IOPS and low latency access to a large number of small files for workloads like seismic interpretation, autonomous driving and risk workloads. The primary goal is to solve complex problems at ultra-fast speeds. Other examples of high-performance computing include fluid dynamics and other physical simulation or analysis which require scalability and high throughput. To enable our customers to perform HPC, we ensure that large amounts of data are accessible with a large amount of concurrency.
### Backup and archive
Business continuity and disaster recovery (BCDR) is a business’s ability to remain operational after an adverse event. In terms of storage, this objective equates to maintaining business continuity across outages to storage systems. With the introduction of Backup-as-a-Service offerings throughout the industry, BCDR data is increasingly migrating to the public cloud. The backup and archive workload functions as the last line of defense against rising ransomware and malicious attacks. When there is a service interruption or accidental deletion or corruption of data, recovering the data in an efficient and orchestrated manner is the highest priority. To accomplish this, Azure Storage makes it possible to store and retrieve large amounts of data in the most cost-effective fashion.
### Machine learning and artificial intelligence
Artificial intelligence (AI) is technology that simulates human intelligence and problem-solving capabilities in machines. Machine learning (ML) is a sub-discipline of AI that uses algorithms to create models that enable machines to perform tasks. Both represent the newest workload on Azure, and one that is growing at a rapid pace. This type of workload can be applied across every industry to improve metrics and meet performance goals. These technologies can lead to discoveries of life-saving drugs and practices in the field of medicine and health while also providing health assessments. Other everyday uses of ML and AI include fraud detection, image recognition, and the flagging of misinformation. These workloads typically need highly specialized compute (large numbers of GPUs) and require high-throughput, high-IOPS, low-latency access to storage, as well as POSIX file system access. Azure Storage supports these workloads by storing checkpoints and providing storage for large-scale datasets and models, which read and write at a pace that keeps GPUs utilized.
### Recommended workload configurations
The table below illustrates Microsoft's suggested storage account configurations for each workload.
<sup>1</sup> Zone-redundant storage (ZRS) is a good default for analytics workloads because ZRS offers additional redundancy compared to locally redundant storage (LRS), protecting against zonal failures while remaining fully compatible with analytics frameworks. Customers can also leverage geo-redundant storage (GRS/RA-GRS) if additional redundancy is required for an analytics workload.
<br/><br/><sup>2</sup> As a core capability of Azure Data Lake Storage (ADLS), the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) enhances data organization and access efficiency for large amounts of data, making it ideal for analytics workloads.
<br/><br/><sup>3</sup> The cool access tier offers a cost-effective solution for storing infrequently accessed data, which is typical for a backup and archive workload. Customers can also consider the cold access tier after evaluating costs.
## Storage account endpoints
A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has a URL address that includes your unique account name. The combination of the account name and the service endpoint forms the endpoints for your storage account.
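For example, the default Blob Storage endpoint in the Azure public cloud combines the account name with the service suffix (other services such as Files, Queues, and Tables use their own suffixes, and sovereign clouds use different domains):

```python
def blob_endpoint(account_name: str) -> str:
    # Default public-cloud blob service endpoint format.
    return f"https://{account_name}.blob.core.windows.net"
```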