
Problem: it is unclear whether the test suite is actually running the NLP-related tests #17

@jrwdunham

Description

The functional tests are written in such a way that if foma and/or MITLM is not installed on the host, certain tests pass vacuously. On a TravisCI host these OS deps are not installed; on a docker-compose local deploy they should be. This makes it hard to know whether all of these tests are genuinely passing.
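
For illustration, the vacuous pass typically looks something like the sketch below; the `foma_installed` helper and the test name are assumptions for this example, not the OLD's actual test code:

```python
import shutil

def foma_installed():
    # True only when the foma binary is on the test host's PATH.
    return shutil.which('foma') is not None

def test_phonology_compiles():
    if not foma_installed():
        return  # no assertions run, so the test "passes" silently
    # ... real foma-dependent assertions would go here ...
```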

Proposed solution: add a strict flag to the test runner so that the foma/MITLM-dependent tests fail, rather than pass vacuously, when those dependencies are missing; see the sketch below.
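
A minimal sketch of what such a strict flag could look like, assuming pytest and an illustrative environment variable `OLD_STRICT_TESTS` (the variable name and the `require_dep` helper are assumptions, not the actual implementation):

```python
import os
import shutil

import pytest

# Illustrative flag: strict mode makes a missing OS dep a hard failure.
STRICT = os.environ.get('OLD_STRICT_TESTS') == '1'

def require_dep(binary):
    """Fail (strict mode) or skip visibly (default) when a dep is absent."""
    if shutil.which(binary) is None:
        if STRICT:
            pytest.fail('%s is required but not installed' % binary)
        pytest.skip('%s not installed; skipping dependent test' % binary)

def test_language_model_estimation():
    require_dep('estimate-ngram')  # MITLM's command-line entry point
    # ... real MITLM-dependent assertions would go here ...
```

In the default mode a missing dependency shows up as an explicit skip instead of a silent pass; running with `OLD_STRICT_TESTS=1` (e.g. in the docker-compose deploy, where foma and MITLM should be present) turns the same condition into a failure.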
