articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md (4 additions, 4 deletions)
@@ -127,7 +127,7 @@ Sharding is a data tier pattern that was introduced in Oracle 12.2. It allows yo
Sharding is suitable for high-throughput OLTP applications that can't afford any downtime. All rows with the same sharding key are always guaranteed to be on the same shard, thus increasing performance while providing high consistency. Applications that use sharding must have a well-defined data model and data distribution strategy (consistent hash, range, list, or composite) that primarily accesses data using a sharding key (for example, *customerId* or *accountNum*). Sharding also allows you to store particular sets of data closer to the end customers, helping you meet your performance and compliance requirements.
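As a rough illustration of the hash-based routing behind this guarantee (a hypothetical Python sketch, not Oracle's implementation — shard count and key names are invented; true consistent hashing additionally minimizes data movement when shards are added):

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of shards


def shard_for(sharding_key: str) -> int:
    """Map a sharding key (e.g. customerId) to a shard index.

    A stable hash of the key guarantees that every row with the
    same sharding key always lands on the same shard.
    """
    digest = hashlib.sha256(sharding_key.encode()).hexdigest()
    return int(digest, 16) % SHARD_COUNT


# Same key always routes to the same shard:
assert shard_for("customer-42") == shard_for("customer-42")
```

Because the mapping depends only on the key, any application node can compute a row's shard locally, without consulting a central directory.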
-It is recommended that you replicate your shards for high availability and disaster recovery. This setup can be done using Oracle technologies such as Oracle Data Guard or Oracle GoldenGate. A unit of replication can be a shard, a part of a shard, or a group of shards. The availability of a sharded database is not affected by an outage or slowdown of one or more shards. For high availability, the standby shards can be placed in the same availability zone where the primary shards are placed. For disaster recovery, the standby shards can be located in another region. You may also deploy shards in multiple regions to serve traffic in those regions. Read more about configuring high availability and replication of your sharded database in [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html).
+It is recommended that you replicate your shards for high availability and disaster recovery. This setup can be done using Oracle technologies such as Oracle Data Guard or Oracle GoldenGate. A unit of replication can be a shard, a part of a shard, or a group of shards. The availability of a sharded database is not affected by an outage or slowdown of one or more shards. For high availability, the standby shards can be placed in the same availability zone where the primary shards are placed. For disaster recovery, the standby shards can be located in another region. You may also deploy shards in multiple regions to serve traffic in those regions. Read more about configuring high availability and replication of your sharded database in [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html).
Oracle Sharding primarily consists of the following components. More information about these components can be found in [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html):
@@ -156,11 +156,11 @@ There are different ways to shard a database:
* Composite sharding - A combination of system-managed and user-defined sharding for different _shardspaces_
* Table subpartitions - Similar to a regular partitioned table.
-Read more about the different [sharding methods](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-methods.html) in Oracle's documentation.
+Read more about the different [sharding methods](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-methods.html) in Oracle's documentation.
While a sharded database may look like a single database to applications and developers, migrating from a non-sharded database onto a sharded one requires careful planning to determine which tables will be duplicated versus sharded.
-Duplicated tables are stored on all shards, whereas sharded tables are distributed across different shards. The recommendation is to duplicate small and dimensional tables and distribute/shard the fact tables. Data can be loaded into your sharded database using either the shard catalog as the central coordinator or by running Data Pump on each shard. Read more about [migrating data to a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-loading-data.html) in Oracle's documentation.
+Duplicated tables are stored on all shards, whereas sharded tables are distributed across different shards. The recommendation is to duplicate small and dimensional tables and distribute/shard the fact tables. Data can be loaded into your sharded database using either the shard catalog as the central coordinator or by running Data Pump on each shard. Read more about [migrating data to a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-loading-data.html) in Oracle's documentation.
#### Oracle Sharding with Data Guard
@@ -192,7 +192,7 @@ The way the data gets replicated depends on the replication factor. With a repli
In the preceding architecture, shardgroup A and shardgroup B both contain the same data but reside in different availability zones. If both shardgroup A and shardgroup B have the same replication factor of 3, each row/chunk of your sharded table will be replicated 6 times across the two shardgroups. If shardgroup A has a replication factor of 3 and shardgroup B has a replication factor of 2, each row/chunk will be replicated 5 times across the two shardgroups.
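The replica counts above follow from summing the replication factors of the shardgroups that hold copies of the same data; a quick check:

```python
def total_copies(replication_factors):
    """Total copies of each row/chunk, given one replication
    factor per shardgroup holding the same data."""
    return sum(replication_factors)


# Shardgroups A and B both at replication factor 3 -> 6 copies per chunk.
print(total_copies([3, 3]))  # 6
# A at replication factor 3, B at 2 -> 5 copies per chunk.
print(total_copies([3, 2]))  # 5
```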
-This setup prevents data loss if an instance-level or availability zone-level failure occurs. The application layer is able to read from and write to each shard. To minimize conflicts, Oracle Sharding designates a "master chunk" for each range of hash values. This feature ensures that write requests for a particular chunk are directed to its master chunk. In addition, Oracle GoldenGate provides automatic conflict detection and resolution to handle any conflicts that may arise. For more information and limitations of implementing GoldenGate with Oracle Sharding, see Oracle's documentation on using [Oracle GoldenGate with a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72).
+This setup prevents data loss if an instance-level or availability zone-level failure occurs. The application layer is able to read from and write to each shard. To minimize conflicts, Oracle Sharding designates a "master chunk" for each range of hash values. This feature ensures that write requests for a particular chunk are directed to its master chunk. In addition, Oracle GoldenGate provides automatic conflict detection and resolution to handle any conflicts that may arise. For more information and limitations of implementing GoldenGate with Oracle Sharding, see Oracle's documentation on using [Oracle GoldenGate with a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72).
In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. The recommendation is to deploy at least one GSM/shard director per data center or region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application server and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transitions. Azure Application Gateway and the shard director keep track of the request and response latency and route requests accordingly.