@@ -19,7 +19,7 @@ Feature: Explicit Model deployment
storageUri: gs://seldon-models/scv2/samples/mlserver_1.3.5/iris-sklearn
"""
When the model "alpha-1" should eventually become Ready with timeout "20s"
-Then send HTTP inference request with timeout "20s" to model "alpha-1" with payload:
+Then I send HTTP inference request with timeout "20s" to model "alpha-1" with payload:
"""
{
"inputs": [
@@ -52,7 +52,7 @@ Feature: Explicit Model deployment
}
] }
"""
-Then send gRPC inference request with timeout "20s" to model "alpha-1" with payload:
+Then I send gRPC inference request with timeout "20s" to model "alpha-1" with payload:
"""
{
"inputs": [
@@ -85,7 +85,7 @@ Feature: Explicit Model deployment
] }
"""
Then delete the model "alpha-1" with timeout "10s"
-Then send HTTP inference request with timeout "20s" to model "alpha-1" with payload:
+Then I send HTTP inference request with timeout "20s" to model "alpha-1" with payload:
"""
{
"inputs": [
Expand All @@ -99,7 +99,7 @@ Feature: Explicit Model deployment
}
"""
And expect http response status code "404"
-Then send gRPC inference request with timeout "20s" to model "alpha-1" with payload:
+Then I send gRPC inference request with timeout "20s" to model "alpha-1" with payload:
"""
{
"inputs": [
tests/integration/godog/features/model/over_commit.feature (6 changes: 3 additions & 3 deletions)
@@ -51,7 +51,7 @@ Feature: Explicit Model deployment
storageUri: gs://seldon-models/scv2/samples/mlserver_1.3.5/iris-sklearn
"""
When the model "overcommit-3" should eventually become Ready with timeout "20s"
-Then send HTTP inference request with timeout "20s" to model "overcommit-1" with payload:
+Then I send HTTP inference request with timeout "20s" to model "overcommit-1" with payload:
"""
{
"inputs": [
Expand All @@ -65,7 +65,7 @@ Feature: Explicit Model deployment
}
"""
And expect http response status code "200"
-Then send HTTP inference request with timeout "20s" to model "overcommit-2" with payload:
+Then I send HTTP inference request with timeout "20s" to model "overcommit-2" with payload:
"""
{
"inputs": [
Expand All @@ -79,7 +79,7 @@ Feature: Explicit Model deployment
}
"""
And expect http response status code "200"
-Then send HTTP inference request with timeout "20s" to model "overcommit-3" with payload:
+Then I send HTTP inference request with timeout "20s" to model "overcommit-3" with payload:
"""
{
"inputs": [
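The only change in the two model feature files above is the step wording: "send ... inference request" becomes "I send ... inference request", matching the first-person phrasing common in Gherkin. For the renamed steps to bind, the godog step registration has to use the same text. Below is a minimal sketch of what that registration might look like; the `testContext` type and the handler name `sendHTTPInferenceToModel` are assumptions for illustration, not code taken from this repository.

package steps

import "github.com/cucumber/godog"

// testContext is a hypothetical per-scenario state holder; the suite's
// real type and handler names may differ.
type testContext struct{}

// sendHTTPInferenceToModel would issue the HTTP inference call; the body
// is elided here, it would store the response for later "expect ..." steps.
func (tc *testContext) sendHTTPInferenceToModel(timeout, model string, payload *godog.DocString) error {
	// ... send the request to the model endpoint within the timeout ...
	return nil
}

// InitializeScenario binds step text to handlers. The regex must match the
// new "I send ..." phrasing used in the feature files above; the old
// "send ..." wording would no longer bind after this change.
func InitializeScenario(ctx *godog.ScenarioContext) {
	tc := &testContext{}
	ctx.Step(`^I send HTTP inference request with timeout "([^"]*)" to model "([^"]*)" with payload:$`,
		tc.sendHTTPInferenceToModel)
}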
tests/integration/godog/features/pipeline/conditional.feature (77 changes: 73 additions & 4 deletions)
@@ -1,8 +1,10 @@
-@PipelineDeployment @Functional @Pipelines @Conditional
+@PipelineConditional @Functional @Pipelines @Conditional
Feature: Conditional pipeline with branching models
This pipeline uses a conditional model to route data to either add10 or mul10.
+In order to support decision-based inference
+As a model user
+I need a conditional pipeline that directs inputs to one of multiple models based on a condition

-Scenario: Deploy tfsimple-conditional pipeline and wait for readiness
+Scenario: Deploy a conditional pipeline, run inference, and verify the output
Given I deploy model spec with timeout "30s":
"""
apiVersion: mlops.seldon.io/v1alpha1
@@ -43,7 +45,7 @@ Feature: Conditional pipeline with branching models
And the model "add10-nbsl" should eventually become Ready with timeout "20s"
And the model "mul10-nbsl" should eventually become Ready with timeout "20s"

-And I deploy pipeline spec with timeout "30s":
+When I deploy a pipeline spec with timeout "30s":
"""
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
Expand All @@ -69,3 +71,70 @@ Feature: Conditional pipeline with branching models
stepsJoin: any
"""
Then the pipeline "tfsimple-conditional-nbsl" should eventually become Ready with timeout "40s"
+Then I send gRPC inference request with timeout "20s" to pipeline "tfsimple-conditional-nbsl" with payload:
+  """
+  {
+    "model_name": "conditional-nbsl",
+    "inputs": [
+      {
+        "name": "CHOICE",
+        "contents": {
+          "int_contents": [
+            0
+          ]
+        },
+        "datatype": "INT32",
+        "shape": [
+          1
+        ]
+      },
+      {
+        "name": "INPUT0",
+        "contents": {
+          "fp32_contents": [
+            1,
+            2,
+            3,
+            4
+          ]
+        },
+        "datatype": "FP32",
+        "shape": [
+          4
+        ]
+      },
+      {
+        "name": "INPUT1",
+        "contents": {
+          "fp32_contents": [
+            1,
+            2,
+            3,
+            4
+          ]
+        },
+        "datatype": "FP32",
+        "shape": [
+          4
+        ]
+      }
+    ]
+  }
+  """
+And expect gRPC response body to contain JSON:
+  """
+  {
+    "outputs": [
+      {
+        "name": "OUTPUT",
+        "datatype": "FP32",
+        "shape": [
+          4
+        ]
+      }
+    ],
+    "raw_output_contents": [
+      "AAAgQQAAoEEAAPBBAAAgQg=="
+    ]
+  }
+  """