
Allow log_metrics to be used as a context manager for testing subset functions #1227

@offbyone

Description


Use case

I am attempting to write a unit test for a piece of functionality that isn't at the top of the Lambda function's call stack, but is emitting metrics. Moreover, the functionality is an async coroutine, so it's not amenable to being wrapped directly with the @metrics.log_metrics() decorator.

What I want is to be able to do this:

import json

import pytest
from aws_lambda_powertools import Metrics


@pytest.mark.asyncio
async def test_sub_functionality(capsys):
    metrics = Metrics(namespace="ThisTest")

    # this is the new bit:
    with metrics.log_metrics():
        await fixture.awaitable_thing()

    log = capsys.readouterr().out.strip()  # remove any extra line
    metrics_output = json.loads(log)  # deserialize JSON str
    assert "TheMetric" in metrics_output["_aws"]["CloudWatchMetrics"][0]["Metrics"][0]["Name"]

Solution/User Experience

Provide a context manager facade for the log_metrics decorator, either by factoring the body of that decorator out into a context manager directly, or by implementing one separately.

Alternative solutions

Alternatively, if `log_metrics()` worked with arbitrary functions (and coroutines) instead of being limited to functions that match the Lambda handler interface, I could probably manage using it in my tests more easily.
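A sketch of what that alternative could look like: a decorator that dispatches on `inspect.iscoroutinefunction` so it wraps sync callables and coroutines alike. The `flush` callback here is a hypothetical stand-in for the serialize-and-print step the real decorator performs; the sync/async dispatch is the part being illustrated.

```python
import asyncio
import functools
import inspect


def log_metrics(flush):
    """Hypothetical decorator factory accepting any callable, sync or async.

    `flush` stands in for the metric serialization the real decorator does
    after the wrapped function returns (or raises).
    """

    def decorator(func):
        if inspect.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                try:
                    return await func(*args, **kwargs)
                finally:
                    flush()  # runs even if the coroutine raises
            return async_wrapper

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            finally:
                flush()
        return sync_wrapper

    return decorator


flushed = []


@log_metrics(flush=lambda: flushed.append(True))
async def awaitable_thing():
    return 42


asyncio.run(awaitable_thing())
```

Because the wrapper no longer assumes the `(event, context)` handler signature, it could be applied to any helper under test, not just the handler itself.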

Status: Closed