Commit dc95000

ctauchen and claude committed

fix: convert definition lists to HTML dl elements

MDX/Docusaurus does not support the kramdown-style definition list syntax (term\n: definition). Convert all instances to standard HTML <dl>/<dt>/<dd> elements, which render correctly in MDX without requiring any additional remark plugins.

Affected files:

- calico/networking/openstack/neutron-api.mdx
- calico-cloud/reference/architecture/design/l3-interconnect-fabric.mdx
- versioned copies of the above (3.29, 3.30, 3.31, calico-cloud-22-2)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

1 parent ec0eec1 commit dc95000
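For illustration, the shape of the conversion looks roughly like this. The term and wording here are hypothetical, not taken from the affected files:

```markdown
<!-- Before: kramdown-style definition list, which MDX does not parse -->
Example term
: Example definition text that MDX would render as a literal paragraph
starting with a colon.

<!-- After: explicit HTML, rendered by MDX without extra remark plugins -->
<dl>
<dt>Example term</dt>
<dd>Example definition text, now rendered as a proper definition list.</dd>
</dl>
```

Inline markdown such as `_emphasis_` and `` `code` `` inside the old definitions becomes explicit `<em>` and `<code>` in the HTML version, as the diffs below show.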

File tree

6 files changed: +164 −130 lines


calico-cloud/reference/architecture/design/l3-interconnect-fabric.mdx

Lines changed: 22 additions & 27 deletions

@@ -142,33 +142,28 @@ comfortable with advanced BGP design.

 These considerations are:

-AS continuity
-
-: or _AS puddling_ Any router in an AS _must_ be able to communicate
-with any other router in that same AS without transiting another AS.
-
-Next hop behavior
-
-: By default BGP routers do not change the _next hop_ of a route if it
-is peering with another router in its same AS. The inverse is also
-true, a BGP router will set itself as the _next hop_ of a route if
-it is peering with a router in another AS.
-
-Route reflection
-
-: All BGP routers in a given AS must _peer_ with all the other routers
-in that AS. This is referred to a _complete BGP mesh_. This can
-become problematic as the number of routers in the AS scales up. The
-use of _route reflectors_ reduce the need for the complete BGP mesh.
-However, route reflectors also have scaling considerations.
-
-Endpoints
-
-: In a $[prodname] network, each endpoint is a route. Hardware networking
-platforms are constrained by the number of routes they can learn.
-This is usually in range of 10,000's or 100,000's of routes. Route
-aggregation can help, but that is usually dependent on the
-capabilities of the scheduler used by the orchestration software.
+<dl>
+<dt>AS continuity</dt>
+<dd>or <em>AS puddling</em> Any router in an AS <em>must</em> be able to communicate
+with any other router in that same AS without transiting another AS.</dd>
+<dt>Next hop behavior</dt>
+<dd>By default BGP routers do not change the <em>next hop</em> of a route if it
+is peering with another router in its same AS. The inverse is also
+true, a BGP router will set itself as the <em>next hop</em> of a route if
+it is peering with a router in another AS.</dd>
+<dt>Route reflection</dt>
+<dd>All BGP routers in a given AS must <em>peer</em> with all the other routers
+in that AS. This is referred to a <em>complete BGP mesh</em>. This can
+become problematic as the number of routers in the AS scales up. The
+use of <em>route reflectors</em> reduce the need for the complete BGP mesh.
+However, route reflectors also have scaling considerations.</dd>
+<dt>Endpoints</dt>
+<dd>In a $[prodname] network, each endpoint is a route. Hardware networking
+platforms are constrained by the number of routes they can learn.
+This is usually in range of 10,000's or 100,000's of routes. Route
+aggregation can help, but that is usually dependent on the
+capabilities of the scheduler used by the orchestration software.</dd>
+</dl>

 A deeper discussion of these considerations can be found in the IP
 Fabric Design Considerations\_ appendix.

calico-cloud_versioned_docs/version-22-2/reference/architecture/design/l3-interconnect-fabric.mdx

Lines changed: 26 additions & 27 deletions

@@ -142,33 +142,28 @@ comfortable with advanced BGP design.

 These considerations are:

-AS continuity
-
-: or _AS puddling_ Any router in an AS _must_ be able to communicate
-with any other router in that same AS without transiting another AS.
-
-Next hop behavior
-
-: By default BGP routers do not change the _next hop_ of a route if it
-is peering with another router in its same AS. The inverse is also
-true, a BGP router will set itself as the _next hop_ of a route if
-it is peering with a router in another AS.
-
-Route reflection
-
-: All BGP routers in a given AS must _peer_ with all the other routers
-in that AS. This is referred to a _complete BGP mesh_. This can
-become problematic as the number of routers in the AS scales up. The
-use of _route reflectors_ reduce the need for the complete BGP mesh.
-However, route reflectors also have scaling considerations.
-
-Endpoints
-
-: In a $[prodname] network, each endpoint is a route. Hardware networking
-platforms are constrained by the number of routes they can learn.
-This is usually in range of 10,000's or 100,000's of routes. Route
-aggregation can help, but that is usually dependent on the
-capabilities of the scheduler used by the orchestration software.
+<dl>
+<dt>AS continuity</dt>
+<dd>or <em>AS puddling</em> Any router in an AS <em>must</em> be able to communicate
+with any other router in that same AS without transiting another AS.</dd>
+<dt>Next hop behavior</dt>
+<dd>By default BGP routers do not change the <em>next hop</em> of a route if it
+is peering with another router in its same AS. The inverse is also
+true, a BGP router will set itself as the <em>next hop</em> of a route if
+it is peering with a router in another AS.</dd>
+<dt>Route reflection</dt>
+<dd>All BGP routers in a given AS must <em>peer</em> with all the other routers
+in that AS. This is referred to a <em>complete BGP mesh</em>. This can
+become problematic as the number of routers in the AS scales up. The
+use of <em>route reflectors</em> reduce the need for the complete BGP mesh.
+However, route reflectors also have scaling considerations.</dd>
+<dt>Endpoints</dt>
+<dd>In a $[prodname] network, each endpoint is a route. Hardware networking
+platforms are constrained by the number of routes they can learn.
+This is usually in range of 10,000's or 100,000's of routes. Route
+aggregation can help, but that is usually dependent on the
+capabilities of the scheduler used by the orchestration software.</dd>
+</dl>

 A deeper discussion of these considerations can be found in the IP
 Fabric Design Considerations\_ appendix.

@@ -215,7 +210,9 @@ Within the rack, the configuration is the same for both variants, and is
 somewhat different than the configuration north of the ToR.

 Every router within the rack, which, in the case of $[prodname] is every
+{/* vale Vale.Repetition = NO */}
 compute server, shares the same AS as the ToR that they are connected
+{/* vale Vale.Repetition = YES */}
 to. That connection is in the form of an Ethernet switching layer. Each
 router in the rack must be directly connected to enable the AS to remain
 contiguous. The ToR's _router_ function is then connected to that

@@ -407,8 +404,10 @@ for two _networks_ that are not directly connected, but only connected
 through another _network_ or AS number will not work without a lot of
 policy changes to the BGP routers.

+{/* vale Vale.Repetition = NO */}
 Another corollary of that rule is that a BGP router will not propagate a
 route to a peer if the route has an AS in its path that is the same AS
+{/* vale Vale.Repetition = YES */}
 as the peer. This prevents loops from forming in the network. The effect
 of this prevents two routers in the same AS from transiting another
 router (either in that AS or not).

calico/networking/openstack/neutron-api.mdx

Lines changed: 20 additions & 19 deletions

@@ -72,10 +72,11 @@ In $[prodname], these roles for the Neutron subnet are preserved in their
 entirety. All properties associated with these Neutron subnets are
 preserved and remain meaningful except for:

-`host_routes`
-
-: These have no effect, as the compute nodes will route traffic
-immediately after it egresses the VM.
+<dl>
+<dt><code>host_routes</code></dt>
+<dd>These have no effect, as the compute nodes will route traffic
+immediately after it egresses the VM.</dd>
+</dl>

 ## Ports

@@ -86,27 +87,27 @@ shared layer 3 network that $[prodname] builds in Neutron.

 All properties on a port work as normal, except for the following:

-`network_id`
-
-: The network ID still controls which Neutron network the port is
-attached to, and therefore still controls which Neutron subnets it
-will be placed in. However, as per the [note above](#networks),
-the Neutron network that a port is placed in does not affect which
-machines in the deployment it can contact.
+<dl>
+<dt><code>network_id</code></dt>
+<dd>The network ID still controls which Neutron network the port is
+attached to, and therefore still controls which Neutron subnets it
+will be placed in. However, as per the <a href="#networks">note above</a>,
+the Neutron network that a port is placed in does not affect which
+machines in the deployment it can contact.</dd>
+</dl>

 ### Extended Attributes: Port Binding Attributes

 The `binding:host-id` attribute works as normal. The following notes
 apply to the other attributes:

-`binding:profile`
-
-: This is unused in $[prodname].
-
-`binding:vnic_type`
-
-: This field, if used, **must** be set to `normal`. If set to any
-other value, $[prodname] will not correctly function!
+<dl>
+<dt><code>binding:profile</code></dt>
+<dd>This is unused in $[prodname].</dd>
+<dt><code>binding:vnic_type</code></dt>
+<dd>This field, if used, <strong>must</strong> be set to <code>normal</code>. If set to any
+other value, $[prodname] will not correctly function!</dd>
+</dl>

 ## Quotas

calico_versioned_docs/version-3.29/networking/openstack/neutron-api.mdx

Lines changed: 56 additions & 19 deletions

@@ -72,10 +72,11 @@ In $[prodname], these roles for the Neutron subnet are preserved in their
 entirety. All properties associated with these Neutron subnets are
 preserved and remain meaningful except for:

-`host_routes`
-
-: These have no effect, as the compute nodes will route traffic
-immediately after it egresses the VM.
+<dl>
+<dt><code>host_routes</code></dt>
+<dd>These have no effect, as the compute nodes will route traffic
+immediately after it egresses the VM.</dd>
+</dl>

 ## Ports

@@ -86,27 +87,27 @@ shared layer 3 network that $[prodname] builds in Neutron.

 All properties on a port work as normal, except for the following:

-`network_id`
-
-: The network ID still controls which Neutron network the port is
-attached to, and therefore still controls which Neutron subnets it
-will be placed in. However, as per the [note above](#networks),
-the Neutron network that a port is placed in does not affect which
-machines in the deployment it can contact.
+<dl>
+<dt><code>network_id</code></dt>
+<dd>The network ID still controls which Neutron network the port is
+attached to, and therefore still controls which Neutron subnets it
+will be placed in. However, as per the <a href="#networks">note above</a>,
+the Neutron network that a port is placed in does not affect which
+machines in the deployment it can contact.</dd>
+</dl>

 ### Extended Attributes: Port Binding Attributes

 The `binding:host-id` attribute works as normal. The following notes
 apply to the other attributes:

-`binding:profile`
-
-: This is unused in $[prodname].
-
-`binding:vnic_type`
-
-: This field, if used, **must** be set to `normal`. If set to any
-other value, $[prodname] will not correctly function!
+<dl>
+<dt><code>binding:profile</code></dt>
+<dd>This is unused in $[prodname].</dd>
+<dt><code>binding:vnic_type</code></dt>
+<dd>This field, if used, <strong>must</strong> be set to <code>normal</code>. If set to any
+other value, $[prodname] will not correctly function!</dd>
+</dl>

 ## Quotas

@@ -139,6 +140,42 @@ model. See [Detailed semantics](semantics.mdx) for a
 fuller explanation. Where isolation of a particular Neutron network is
 desired, we recommend expressing that through security group rules.

+## QoS
+
+Calico for OpenStack implements some Neutron QoS policy fields: the `max_kbps`
+and `max_burst_kbps` fields of bandwidth limit rules, and the `max_kpps` field
+of packet rate limit rules. Calico also honours the `direction` field of these
+rules, so these limits can be set independently for both ingress and egress
+directions.
+
+:::note
+
+There is uncertainty as to whether `max_burst_kbps` is intended to configure
+the burst *rate* or the burst *size*. Calico interprets it as the burst *rate*
+and honours `neutron.conf` fields for configuring the burst *size*.
+
+:::
+
+There are also new Calico Neutron driver settings (cluster-wide, set in `neutron.conf`):
+
+- `[calico] max_ingress_connections_per_port` for imposing a maximum number of
+  ingress connections per Neutron port, and
+
+- `[calico] max_egress_connections_per_port` for imposing a maximum number of
+  egress connections per Neutron port.
+
+- `[calico] ingress_burst_kbits`, if non-zero, configures the maximum allowed
+  burst at peakrate, in the ingress direction.
+
+- `[calico] egress_burst_kbits`, if non-zero, configures the maximum allowed
+  burst at peakrate, in the egress direction.
+
+- `[calico] ingress_minburst_bytes`, if non-zero, configures the minimum burst
+  size for peakrate data, in the ingress direction.
+
+- `[calico] egress_minburst_bytes`, if non-zero, configures the minimum burst
+  size for peakrate data, in the egress direction.
+
 ## Load Balancer as a Service

 Load Balancer as a Service (LBaaS) does not function in a $[prodname] network. Any
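The driver settings listed in the QoS section above would sit in the `[calico]` section of `neutron.conf`. A minimal sketch, with hypothetical values chosen for illustration only; the option names come from this commit's QoS text, and defaults and units should be checked against the Calico Neutron driver documentation:

```ini
# Hypothetical example values; only the option names are from the commit text.
[calico]
max_ingress_connections_per_port = 1000
max_egress_connections_per_port = 1000
ingress_burst_kbits = 10000
egress_burst_kbits = 10000
ingress_minburst_bytes = 1500
egress_minburst_bytes = 1500
```

Per the section above, the burst and minburst options only take effect when non-zero, and each limit applies per Neutron port in the named direction.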

calico_versioned_docs/version-3.30/networking/openstack/neutron-api.mdx

Lines changed: 20 additions & 19 deletions

@@ -72,10 +72,11 @@ In $[prodname], these roles for the Neutron subnet are preserved in their
 entirety. All properties associated with these Neutron subnets are
 preserved and remain meaningful except for:

-`host_routes`
-
-: These have no effect, as the compute nodes will route traffic
-immediately after it egresses the VM.
+<dl>
+<dt><code>host_routes</code></dt>
+<dd>These have no effect, as the compute nodes will route traffic
+immediately after it egresses the VM.</dd>
+</dl>

 ## Ports

@@ -86,27 +87,27 @@ shared layer 3 network that $[prodname] builds in Neutron.

 All properties on a port work as normal, except for the following:

-`network_id`
-
-: The network ID still controls which Neutron network the port is
-attached to, and therefore still controls which Neutron subnets it
-will be placed in. However, as per the [note above](#networks),
-the Neutron network that a port is placed in does not affect which
-machines in the deployment it can contact.
+<dl>
+<dt><code>network_id</code></dt>
+<dd>The network ID still controls which Neutron network the port is
+attached to, and therefore still controls which Neutron subnets it
+will be placed in. However, as per the <a href="#networks">note above</a>,
+the Neutron network that a port is placed in does not affect which
+machines in the deployment it can contact.</dd>
+</dl>

 ### Extended Attributes: Port Binding Attributes

 The `binding:host-id` attribute works as normal. The following notes
 apply to the other attributes:

-`binding:profile`
-
-: This is unused in $[prodname].
-
-`binding:vnic_type`
-
-: This field, if used, **must** be set to `normal`. If set to any
-other value, $[prodname] will not correctly function!
+<dl>
+<dt><code>binding:profile</code></dt>
+<dd>This is unused in $[prodname].</dd>
+<dt><code>binding:vnic_type</code></dt>
+<dd>This field, if used, <strong>must</strong> be set to <code>normal</code>. If set to any
+other value, $[prodname] will not correctly function!</dd>
+</dl>

 ## Quotas
