
Commit b2efa4d

Merge pull request #86885 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)

2 parents 221d57a + 0287f1b

File tree

6 files changed: +7 -7 lines changed

articles/active-directory/manage-apps/application-provisioning-when-will-provisioning-finish-specific-user.md

Lines changed: 1 addition & 1 deletion

@@ -59,7 +59,7 @@ The provisioning audit logs record all the operations performed by the provisioning service.
 For more information on how to read the audit logs in the Azure portal, see the [provisioning reporting guide](check-status-user-account-provisioning.md).
 
 ## How long will it take to provision users?
-When using automatic user provisioning with an application, Azure AD automatically provisions and updates user accounts in an app based on things like [user and group assignment](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal) at a regularly scheduled time interval, typically every 10 minutes.
+When using automatic user provisioning with an application, Azure AD automatically provisions and updates user accounts in an app based on things like [user and group assignment](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal) at a regularly scheduled time interval, typically every 40 minutes.
 
 The time it takes for a given user to be provisioned depends mainly on whether your provisioning job is running an initial sync or an incremental sync.
 
articles/blockchain/service/limits.md

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ Changing the pricing tier between Basic and Standard after member creation is not supported.
 
 ## Storage capacity
 
-The maximum amount of storage that can be used per node for ledger data and logs is 1 terabyte.
+The maximum amount of storage that can be used per node for ledger data and logs is 1.8 terabytes.
 
 Decreasing ledger and log storage size is not supported.
 
articles/iot-edge/offline-capabilities.md

Lines changed: 1 addition & 1 deletion

@@ -169,7 +169,7 @@ Or, you can configure the local storage directly in the deployment manifest. For example:
             "image": "mcr.microsoft.com/azureiotedge-hub:1.0",
             "createOptions": {
               "HostConfig": {
-                "Binds":["<HostStoragePath>:<ModuleStoragePath"],
+                "Binds":["<HostStoragePath>:<ModuleStoragePath>"],
                 "PortBindings":{"5671/tcp":[{"HostPort":"5671"}],"8883/tcp":[{"HostPort":"8883"}],"443/tcp":[{"HostPort":"443"}]}}}
           },
           "type": "docker",

articles/sql-database/sql-database-service-tiers-vcore.md

Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ The following table explains the differences between the three tiers:
 ||**General purpose**|**Business critical**|**Hyperscale**|
 |---|---|---|---|
 |Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options.|Business applications with high I/O requirements. Offers highest resilience to failures by using several isolated replicas.|Most business workloads with highly scalable storage and read-scale requirements.|
-|Compute|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores<br/>**Serverless compute**:<br/>Gen5: 0.5 - 4 vCores|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores|
+|Compute|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores<br/>**Serverless compute**:<br/>Gen5: 0.5 - 16 vCores|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores|**Provisioned compute**:<br/>Gen4: 1 to 24 vCores<br/>Gen5: 2 to 80 vCores|
 |Memory|**Provisioned compute**:<br/>Gen4: 7 GB per vCore<br/>Gen5: 5.1 GB per vCore<br/>**Serverless compute**:<br/>Gen5: 3 GB per vCore|**Provisioned compute**:<br/>Gen4: 7 GB per vCore<br/>Gen5: 5.1 GB per vCore|**Provisioned compute**:<br/>Gen4: 7 GB per vCore<br/>Gen5: 5.1 GB per vCore|
 |Storage|Uses remote storage.<br/>**Single database provisioned compute**:<br/>5 GB – 4 TB<br/>**Single database serverless compute**:<br/>5 GB - 1 TB<br/>**Managed instance**: 32 GB - 8 TB|Uses local SSD storage.<br/>**Single database provisioned compute**:<br/>5 GB – 4 TB<br/>**Managed instance**:<br/>32 GB - 4 TB|Flexible autogrow of storage as needed. Supports up to 100 TB of storage. Uses local SSD storage for local buffer-pool cache and local data storage. Uses Azure remote storage as the final long-term data store.|
 |I/O throughput (approximate)|**Single database**: 500 IOPS per vCore with 7000 maximum IOPS.<br/>**Managed instance**: Depends on [size of file](../virtual-machines/windows/premium-storage-performance.md#premium-storage-disk-sizes).|5000 IOPS per core with 200,000 maximum IOPS|Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS depend on the workload.|
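To make the updated serverless row concrete, here is a minimal sketch of the `sku` block you might use in an ARM template for a General Purpose serverless Gen5 database at the new 16-vCore maximum. The resource shape, `apiVersion`, and the `GP_S_Gen5` sku name are assumptions based on common Azure SQL deployments, not taken from this table:

```json
{
  "type": "Microsoft.Sql/servers/databases",
  "apiVersion": "2019-06-01-preview",
  "name": "myserver/mydb",
  "location": "westus2",
  "sku": {
    "name": "GP_S_Gen5",
    "tier": "GeneralPurpose",
    "family": "Gen5",
    "capacity": 16
  }
}
```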

includes/data-factory-file-format.md

Lines changed: 2 additions & 2 deletions

@@ -394,7 +394,7 @@ To use Avro format in a Hive table, you can refer to [Apache Hive’s tutorial](
 
 Note the following points:
 
-* [Complex data types](http://avro.apache.org/docs/current/spec.html#schema_complex) are not supported (records, enums, arrays, maps, unions and fixed).
+* [Complex data types](https://avro.apache.org/docs/current/spec.html#schema_complex) are not supported (records, enums, arrays, maps, unions and fixed).
 
 ### Specifying OrcFormat
 If you want to parse the ORC files or write the data in ORC format, set the `format` `type` property to **OrcFormat**. You do not need to specify any properties in the Format section within the typeProperties section. Example:

@@ -414,7 +414,7 @@ If you want to parse the ORC files or write the data in ORC format, set the `format` `type` property to **OrcFormat**. Example:
 Note the following points:
 
 * Complex data types are not supported (STRUCT, MAP, LIST, UNION)
-* An ORC file has three [compression-related options](http://hortonworks.com/blog/orcfile-in-hdp-2-better-compression-better-performance/): NONE, ZLIB, SNAPPY. Data Factory supports reading data from an ORC file in any of these compressed formats. It uses the compression codec that is in the metadata to read the data. However, when writing to an ORC file, Data Factory chooses ZLIB, which is the default for ORC. Currently, there is no option to override this behavior.
+* An ORC file has three [compression-related options](https://hortonworks.com/blog/orcfile-in-hdp-2-better-compression-better-performance/): NONE, ZLIB, SNAPPY. Data Factory supports reading data from an ORC file in any of these compressed formats. It uses the compression codec that is in the metadata to read the data. However, when writing to an ORC file, Data Factory chooses ZLIB, which is the default for ORC. Currently, there is no option to override this behavior.
 
 ### Specifying ParquetFormat
 If you want to parse the Parquet files or write the data in Parquet format, set the `format` `type` property to **ParquetFormat**. You do not need to specify any properties in the Format section within the typeProperties section. Example:
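The examples these paragraphs refer to fall outside this diff's context window. For orientation, this is roughly what such a format section looks like inside a Data Factory dataset's `typeProperties`; the `folderPath` value is illustrative, and swapping **OrcFormat** for **ParquetFormat** (or **AvroFormat**) follows the same pattern:

```json
"typeProperties": {
  "folderPath": "mycontainer/myfolder",
  "format": {
    "type": "OrcFormat"
  }
}
```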

includes/virtual-machines-image-builder-overview.md

Lines changed: 1 addition & 1 deletion

@@ -69,7 +69,7 @@ The Azure Image Builder is a fully managed Azure service that is accessible by a
 
 1. Create the Image Template as a .json file. This .json file contains information about the image source, customizations, and distribution; a minimal sketch follows this list. There are multiple examples in the [Azure Image Builder GitHub repository](https://github.com/danielsollondon/azvmimagebuilder/tree/master/quickquickstarts).
 1. Submit it to the service. This creates an Image Template artifact in the resource group you specify. In the background, Image Builder downloads the source image or ISO, and scripts as needed. These are stored in a separate resource group that is automatically created in your subscription, in the format: IT_\<DestinationResourceGroup>_\<TemplateName>.
-1. Once the Image Template is created, you can then build the image. In the background, Image Builder uses the template and source files to create a VM, network, and storage in the IT_\<DestinationResourceGroup>_\<TemplateName> resource group.
+1. Once the Image Template is created, you can then build the image. In the background, Image Builder uses the template and source files to create a VM (D1v2), network, public IP, and storage in the IT_\<DestinationResourceGroup>_\<TemplateName> resource group.
 1. As part of the image creation, Image Builder distributes the image according to the template, then deletes the additional resources in the IT_\<DestinationResourceGroup>_\<TemplateName> resource group that were created for the process.
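As referenced in step 1, here is a minimal sketch of an Image Template .json, assuming the Microsoft.VirtualMachineImages/imageTemplates preview schema. The `apiVersion`, source image, customizer, and distribution target below are illustrative, not taken from this article:

```json
{
  "type": "Microsoft.VirtualMachineImages/imageTemplates",
  "apiVersion": "2019-05-01-preview",
  "location": "westus2",
  "properties": {
    "source": {
      "type": "PlatformImage",
      "publisher": "Canonical",
      "offer": "UbuntuServer",
      "sku": "18.04-LTS",
      "version": "latest"
    },
    "customize": [
      {
        "type": "Shell",
        "name": "UpdatePackages",
        "inline": ["sudo apt-get update -y"]
      }
    ],
    "distribute": [
      {
        "type": "ManagedImage",
        "imageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/images/myBakedImage",
        "runOutputName": "myBakedImageOutput",
        "location": "westus2"
      }
    ]
  }
}
```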
