
Commit d25c129

felixhuettner authored and mrhopeman committed
Reduce lock contention on subnets
HINT: This isn't a clean backport, as we keep the subnet "in_use"
field. We can't backport the DB update that would remove the field.

In [1] a lock was introduced with the goal of preventing subnets from
being deleted while ports are being created in them in parallel. This
was achieved by acquiring an exclusive lock on the row of the subnet in
the Subnet table when adding/modifying a port or deleting the subnet.
However, as this was an exclusive lock, it also prevented concurrent
port modifications on the same subnet from happening. This can cause
performance issues in environments with large shared subnets (e.g. a
large external subnet).

To reduce the lock contention for this case we split the lock in two
parts:

* For normal port operations we acquire a shared lock on the row of the
  subnet. This allows multiple such operations to happen in parallel.
* For deleting a subnet we acquire an exclusive lock on the row of the
  subnet. This lock cannot be acquired while any shared lock is held on
  the row.

With this we maintain the same locking level as before, but reduce the
amount of lock contention and thereby improve throughput. The
performance improvement can be measured using rally test [2] (improving
from 21 to 18 seconds). Alternatively it can be tested using 250
parallel curl calls to create a port in the same network; this improves
from 113s to 42s.

[1]: https://review.opendev.org/c/openstack/neutron/+/713045
[2]: https://github.com/openstack/rally-openstack/blob/master/samples/tasks/scenarios/neutron/create-and-delete-ports.json

Closes-Bug: #2009055
Change-Id: I31b1a9c2f986f59fee0da265acebbd88d2f8e4f8
(cherry picked from commit c0af5b3)
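For context, the row-lock semantics the patch builds on can be sketched with plain SQLAlchemy. This is a minimal illustration, not Neutron code: the `Subnet` model, engine URL, and session below are stand-ins, and the emitted SQL is shown for MySQL (PostgreSQL renders the shared variant as `FOR SHARE`).

```python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Subnet(Base):  # stand-in for neutron.db.models_v2.Subnet
    __tablename__ = 'subnets'
    id = sa.Column(sa.String(36), primary_key=True)

# Row locks need a transactional backend; the URL here is illustrative.
engine = sa.create_engine('mysql+pymysql://user:pass@localhost/neutron')
session = sessionmaker(bind=engine)()

# Shared lock, as used for port operations. On MySQL this emits:
#   SELECT ... FROM subnets WHERE id = %s LOCK IN SHARE MODE
# Any number of sessions can hold it on the same row at once.
row = (session.query(Subnet)
       .filter_by(id='some-subnet-id')
       .enable_eagerloads(False)  # eager loads would break the lock query
       .with_for_update(read=True)
       .first())

# Exclusive lock, as used for subnet deletion. Emits `... FOR UPDATE`;
# it waits for every shared holder and then keeps all other lock
# requests out until the transaction ends.
row = (session.query(Subnet)
       .filter_by(id='some-subnet-id')
       .enable_eagerloads(False)
       .with_for_update()
       .first())
```

The "250 parallel curl calls" measurement from the message above can be approximated in Python as well; a hedged sketch, where the endpoint, token, and network ID are placeholders you would fill in for your deployment:

```python
import concurrent.futures
import requests

NEUTRON = 'http://controller:9696/v2.0/ports'   # placeholder endpoint
HEADERS = {'X-Auth-Token': '<token>'}           # placeholder token

def create_port(i):
    # Standard Neutron port-create body; network_id is a placeholder.
    body = {'port': {'network_id': '<shared-network-id>',
                     'name': 'lock-test-%d' % i}}
    return requests.post(NEUTRON, json=body, headers=HEADERS).status_code

# Fire 250 creates against the same network in parallel and report the
# HTTP status codes; wall-clock time is the figure quoted above.
with concurrent.futures.ThreadPoolExecutor(max_workers=250) as pool:
    print(list(pool.map(create_port, range(250))))
```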
1 parent 86bc376 · commit d25c129

3 files changed: +41 -15 lines

neutron/db/db_base_plugin_v2.py

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@
 
 
 def _ensure_subnet_not_used(context, subnet_id):
-    models_v2.Subnet.lock_register(
+    models_v2.Subnet.write_lock_register(
         context, exc.SubnetInUse(subnet_id=subnet_id), id=subnet_id)
     try:
         registry.publish(

neutron/db/ipam_backend_mixin.py

Lines changed: 1 addition & 1 deletion
@@ -683,7 +683,7 @@ def _ipam_get_subnets(self, context, network_id, host, service_type=None,
             msg = ('This subnet is being modified by another concurrent '
                    'operation')
         for subnet in subnets:
-            subnet.lock_register(
+            subnet.read_lock_register(
                 context, exc.SubnetInUse(subnet_id=subnet.id, reason=msg),
                 id=subnet.id)
         subnet_dicts = [self._make_subnet_dict(subnet, context=context)
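The shared lock taken here is what lets many port operations on one subnet proceed in parallel. A toy demonstration of the interplay, assuming three independent SQLAlchemy sessions `s1`, `s2`, `s3` bound to the same MySQL or PostgreSQL database, a mapped `Subnet` model, and an existing row with id `sid` (none of this is Neutron code):

```python
def lock_row(session, shared):
    # Same query shape as the patch: no eager loads, row lock, first match.
    return (session.query(Subnet)
            .filter_by(id=sid)
            .enable_eagerloads(False)
            .with_for_update(read=shared)
            .first())

lock_row(s1, shared=True)   # shared lock: acquired immediately
lock_row(s2, shared=True)   # a second shared lock coexists with the first
lock_row(s3, shared=False)  # exclusive lock: blocks here until s1 and s2
                            # commit or roll back
```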

neutron/db/models_v2.py

Lines changed: 39 additions & 13 deletions
@@ -33,25 +33,51 @@
 class HasInUse(object):
     """NeutronBaseV2 mixin, to add the flag "in_use" to a DB model.
 
-    The content of this flag (boolean) parameter is not relevant. The goal of
-    this field is to be used in a write transaction to mark a DB register as
-    "in_use". Writing any value on this DB parameter will lock the container
-    register. At the end of the DB transaction, the DB engine will check if
-    this register was modified or deleted. In such case, the transaction will
-    fail and won't be committed.
-
-    "lock_register" is the method to write the register "in_use" column.
-    Because the lifespan of this DB lock is the DB transaction, there isn't an
-    unlock method. The lock will finish once the transaction ends.
+    The goal of this class is to allow users to lock specific database rows
+    with a shared or exclusive lock (without necessarily introducing a change
+    in the table itself). Having these locks allows the DB engine to prevent
+    concurrent modifications (e.g. the deletion of a resource while we are
+    currently adding a new dependency on the resource).
+
+    "read_lock_register" takes a shared DB lock on the row specified by the
+    filters. The lock is automatically released once the transaction ends.
+    You can have any number of parallel read locks on the same DB row, but
+    no write lock in parallel with them.
+
+    "write_lock_register" takes an exclusive DB lock on the row specified by
+    the filters. The lock is automatically released on transaction commit.
+    You may only have one write lock on each row at a time; it therefore
+    blocks all other read and write locks on this row.
     """
+    # keep this field so that we do not need to update the database
+    # schema in this backport
     in_use = sa.Column(sa.Boolean(), nullable=False,
                        server_default=sql.false(), default=False)
 
     @classmethod
-    def lock_register(cls, context, exception, **filters):
+    def write_lock_register(cls, context, exception, **filters):
+        # we use `with_for_update()` to include `FOR UPDATE` in the SQL
+        # statement.
+        # we need to set `enable_eagerloads(False)` so that we do not try
+        # to load attached resources (e.g. standardattributes), as this
+        # breaks the `FOR UPDATE` statement.
         num_reg = context.session.query(
-            cls).filter_by(**filters).update({'in_use': True})
-        if num_reg != 1:
+            cls).filter_by(**filters).enable_eagerloads(
+                False).with_for_update().first()
+        if num_reg is None:
+            raise exception
+
+    @classmethod
+    def read_lock_register(cls, context, exception, **filters):
+        # we use `with_for_update(read=True)` to include `LOCK IN SHARE
+        # MODE` in the SQL statement.
+        # we need to set `enable_eagerloads(False)` so that we do not try
+        # to load attached resources (e.g. standardattributes), as this
+        # breaks the `LOCK IN SHARE MODE` statement.
+        num_reg = context.session.query(
+            cls).filter_by(**filters).enable_eagerloads(
+                False).with_for_update(read=True).first()
+        if num_reg is None:
             raise exception
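Taken together, the call sites in the first two files pair up with the new helpers like this (a sketch assembled from the hunks above; the surrounding transaction setup is Neutron's usual context plumbing and is omitted):

```python
# Port-side path (ipam_backend_mixin.py): any number of callers can hold
# the shared lock on the same subnet row concurrently.
subnet.read_lock_register(
    context, exc.SubnetInUse(subnet_id=subnet.id, reason=msg),
    id=subnet.id)

# Delete path (db_base_plugin_v2.py): the exclusive lock waits for all
# shared holders and blocks new ones until the delete commits.
models_v2.Subnet.write_lock_register(
    context, exc.SubnetInUse(subnet_id=subnet_id), id=subnet_id)
```

Since both helpers end in `.first()`, a subnet row that has already disappeared comes back as `None` and the supplied exception is raised, preserving the error behavior of the old `lock_register`.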
