Commit fe2ba09

committed
Replaced list markers with asterisks.
1 parent 095ff1a commit fe2ba09

File tree

1 file changed: +17, -16 lines
  • docs/proposals/1374-mc-inference-gateways

docs/proposals/1374-mc-inference-gateways/README.md

Lines changed: 17 additions & 16 deletions
@@ -88,14 +88,14 @@ In the future, a more advanced implementation could allow Endpoint Pickers to p
 
 **Pros**:
 
-- Reuses existing MCS model
-- Simplest possible API model
-- “Export” configuration lives on InferencePool and clearly applies to the entire pool, not just EPP
-- Can clearly reference an InferencePool in other clusters without having one locally
+* Reuses existing MCS model
+* Simplest possible API model
+* “Export” configuration lives on InferencePool and clearly applies to the entire pool, not just EPP
+* Can clearly reference an InferencePool in other clusters without having one locally
 
 **Cons**:
 
-- Does not reuse MCS API (unclear if this is a con)
+* Does not reuse MCS API (unclear if this is a con)
 
 ## Alternative 1: MCS API for EPP
 
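The "export configuration lives on InferencePool" option in the hunk above might look roughly like the following. This is an illustrative sketch only: the `export` stanza and its fields are hypothetical and are not part of any released Gateway API Inference Extension version; the surrounding InferencePool fields follow the v1alpha2 shape.

```yaml
# Hypothetical sketch: an InferencePool that opts into multi-cluster export.
# The `export` stanza does NOT exist in the Inference Extension API today;
# it only illustrates the shape this alternative implies.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llm-pool
spec:
  selector:
    app: vllm-llama
  targetPortNumber: 8000
  extensionRef:
    name: llm-pool-epp   # the Endpoint Picker (EPP) for this pool
  export:                # hypothetical: applies to the whole pool, not just the EPP
    scope: ClusterSet
```

Because the export marker sits on the pool itself, a Gateway in another cluster could reference the exported pool directly, matching the "without having one locally" pro listed above.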

@@ -105,15 +105,16 @@ If we lean into the idea that the only thing a Gateway needs to know is the Endp
 
 **Pros**:
 
-- Reuses existing MCS infrastructure.
-- Likely relatively simple to implement.
+* Reuses existing MCS infrastructure.
+* Likely relatively simple to implement.
 
 **Cons**:
 
-- Referencing InferencePools in other clusters requires you to create an InferencePool locally.
-- Significantly more complex configuration (more YAML at least).
-- "FailOpen" mode becomes ~impossible if implementations don't actually have some model server endpoints to fall back to.
-- In this model, you don’t actually choose to export an InferencePool, you export the Endpoint Picker, that could lead to significant confusion.
+* Referencing InferencePools in other clusters requires you to create an InferencePool locally.
+* Significantly more complex configuration (more YAML at least).
+* "FailOpen" mode becomes ~impossible if implementations don't actually have some model server endpoints to fall back to.
+* In this model, you don’t actually choose to export an InferencePool, you export the Endpoint Picker, that could lead to significant confusion.
+* InferencePool is meant to be a replacement for a Service so it may seem counterintuitive for a user to create a Service to achieve multi-cluster inference.
 
 ## Alternative 2: New MCS API
 
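Alternative 1 above would reuse the existing MCS `ServiceExport` resource to export the Service fronting the Endpoint Picker, while every consuming cluster still needs its own local InferencePool. A sketch, with illustrative names (`ServiceExport` is the real MCS API resource; the pool and service names are made up):

```yaml
# Sketch of Alternative 1: export the Endpoint Picker's Service via the MCS API.
# In the exporting cluster, mark the EPP Service for export to the ClusterSet.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: llm-pool-epp       # Service fronting the Endpoint Picker
  namespace: inference
---
# In each consuming cluster, an InferencePool must still be created locally,
# referencing the imported EPP service - the first Con listed above.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llm-pool
  namespace: inference
spec:
  selector:
    app: vllm-llama        # may match zero local pods in a pure consumer cluster
  targetPortNumber: 8000
  extensionRef:
    name: llm-pool-epp     # resolved via the MCS ServiceImport
```

The extra resources per consuming cluster are what the "more YAML" con refers to, and an empty local selector is what makes "FailOpen" hard: there are no local model server endpoints to fall back to.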

@@ -155,11 +156,11 @@ Can we find a way to configure preferences for where a request should be routed?
 
 ### Prior Art
 
-- [GEP-1748: Gateway API Interaction with Multi-Cluster Services](https://gateway-api.sigs.k8s.io/geps/gep-1748/)
-- [Envoy Gateway with Multi-Cluster Services](https://gateway.envoyproxy.io/latest/tasks/traffic/multicluster-service/)
-- [Multicluster Service API](https://multicluster.sigs.k8s.io/concepts/multicluster-services-api/)
-- [Submariner](https://submariner.io/)
+* [GEP-1748: Gateway API Interaction with Multi-Cluster Services](https://gateway-api.sigs.k8s.io/geps/gep-1748/)
+* [Envoy Gateway with Multi-Cluster Services](https://gateway.envoyproxy.io/latest/tasks/traffic/multicluster-service/)
+* [Multicluster Service API](https://multicluster.sigs.k8s.io/concepts/multicluster-services-api/)
+* [Submariner](https://submariner.io/)
 
 ### References
 
-- [Original Doc for MultiCluster Inference Gateway](https://docs.google.com/document/d/1QGvG9ToaJ72vlCBdJe--hmrmLtgOV_ptJi9D58QMD2w/edit?tab=t.0#heading=h.q6xiq2fzcaia)
+* [Original Doc for MultiCluster Inference Gateway](https://docs.google.com/document/d/1QGvG9ToaJ72vlCBdJe--hmrmLtgOV_ptJi9D58QMD2w/edit?tab=t.0#heading=h.q6xiq2fzcaia)
