
Unit/integration testing durable functions #460

@paulschroeder-tomtom

Description


💡 Feature description
We are using durable functions to build an API (what we call the orchestration API, oAPI). It processes requests and makes HTTP calls to other APIs (primitive APIs in our terminology, pAPIs). We have created a durable function (DF) app and registered several functions via decorators for the different routes, e.g.:

import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

app.register_blueprint(projects_orchestrators)
app.register_blueprint(landing_page_bp)

# /projects

@app.route(route="projects")
@app.function_name(name="projects")
@app.durable_client_input(client_name="client")
async def projects(req: func.HttpRequest, context: func.Context, client) -> func.HttpResponse:
    ...
    instance_id = await client.start_new("list_projects_orchestrator", client_input=client_input)
    ...
    return client.create_check_status_response(req, instance_id)

# in the projects_orchestrators blueprint:
@projects_orchestrators.orchestration_trigger(context_name="context")
def list_projects_orchestrator(context: df.DurableOrchestrationContext):
    ...
    # the real magic happens here
    ...
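To make this concrete, here is a simplified sketch of what happens inside (fetch_papi is a hypothetical activity name, and the real orchestrator does considerably more):

import requests

@projects_orchestrators.activity_trigger(input_name="url")
def fetch_papi(url: str) -> dict:
    # hypothetical activity: the actual HTTP call to a pAPI happens here
    return requests.get(url).json()

@projects_orchestrators.orchestration_trigger(context_name="context")
def list_projects_orchestrator(context: df.DurableOrchestrationContext):
    # fan out to the primitive APIs and aggregate the results
    foo = yield context.call_activity("fetch_papi", "https://papi1.com/foo")
    bar = yield context.call_activity("fetch_papi", "https://papi2.com/bar")
    return {"foo": foo, "bar": bar}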

💭 Describe alternatives you've considered
What we would now like to do is (integration-) test the whole setup, treating the DF app as a black box. Our test should look roughly like this:

import json

import azure.functions as func
import httpretty
import pytest

@pytest.mark.asyncio
@httpretty.activate
async def test_projects():
    # mock the pAPIs
    httpretty.register_uri(httpretty.GET, 'https://papi1.com/foo', body=json.dumps({'data': 123}))
    httpretty.register_uri(httpretty.GET, 'https://papi2.com/bar', body=json.dumps({'data': 456}))

    request = func.HttpRequest(...)

    # pseudocode
    from function_app import projects as function_under_test
    result = await function_under_test(req=request, context=context, client=client)
    # /pseudocode

    assert result == ...

We have already done quite some tinkering to even get the equivalent of result = await function_under_test(req=request, context=context, client=client) working. But everything is quite fragile and still failing because, we suspect, things that are supplied by the runtime are still missing.
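For reference, here is a minimal sketch of the direction of our tinkering, mocking the durable client with unittest.mock. Note that projects_handler below is a stand-in for the undecorated coroutine behind projects; how to obtain that coroutine from the decorated app is exactly the part we are unsure about:

from unittest.mock import AsyncMock, MagicMock

import azure.functions as func
import pytest

async def projects_handler(req, context, client):
    # stand-in for the undecorated body of `projects` above
    instance_id = await client.start_new("list_projects_orchestrator", client_input=None)
    return client.create_check_status_response(req, instance_id)

@pytest.mark.asyncio
async def test_projects_with_mocked_client():
    # azure.functions.HttpRequest can be constructed directly for tests
    request = func.HttpRequest(method="GET", url="/api/projects", body=b"")

    # start_new is async, create_check_status_response is not
    client = AsyncMock()
    client.start_new.return_value = "instance-123"
    client.create_check_status_response = MagicMock(
        return_value=func.HttpResponse(status_code=202)
    )
    context = MagicMock(spec=func.Context)

    result = await projects_handler(request, context, client)

    client.start_new.assert_awaited_once()
    assert result.status_code == 202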

We would also really like to avoid any shell command calls to bring up the runtime and prefer to have everything in Python, so we can easily mock external dependencies (we are also able to emulate the storage via Azurite).
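Similarly, for testing the orchestrator in isolation, the furthest we got is driving the generator by hand with a mocked context. This is a sketch against a stand-in generator; whether this interacts sanely with the replay semantics of the real runtime is unclear to us:

from unittest.mock import MagicMock

def sample_orchestrator(context):
    # stand-in for the raw generator behind list_projects_orchestrator
    foo = yield context.call_activity("fetch_papi", "https://papi1.com/foo")
    bar = yield context.call_activity("fetch_papi", "https://papi2.com/bar")
    return {"foo": foo, "bar": bar}

def test_orchestrator_by_hand():
    context = MagicMock()  # call_activity returns mock "tasks" we never inspect
    gen = sample_orchestrator(context)

    fake_reply = {"data": 123}
    result = None
    try:
        gen.send(None)            # prime the generator up to its first yield
        while True:
            gen.send(fake_reply)  # answer every yielded task with a fake pAPI reply
    except StopIteration as stop:
        result = stop.value       # the orchestrator's return value

    assert result == {"foo": fake_reply, "bar": fake_reply}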

So, our burning questions are:

  • is the way we are trying to go a dead end?
  • how do we correctly call functions like projects from above?
  • how do we correctly mock inputs like context and client of projects?
  • is it even possible to bring up the runtime purely in Python/pytest?
  • if so, how?

Thank you!

Metadata

Labels

    Best Practice: Missing best practice guidance; unclear what best practice is.
    P1: Priority 1
