Loading transformer on AWS Lambda throws OMP errno 38 #11836
We have trained a spaCy 3.3.1 transformer textcat model, which we're deploying to AWS Lambda as an AWS Python 3.9 Docker image. The model loads and infers correctly on the Linux development host (both via a test Python script and via AWS SAM local), but fails in the Lambda runtime with OpenMP runtime error #38 (see Lambda error output below). Searching suggests this error occurs because Lambda doesn't support Python multiprocessing: it doesn't mount /dev/shm, so shared-memory operations fail. Loading a blank spaCy model inside the Lambda runtime works without a problem, indicating the error is specific to this trained model, or perhaps to transformer models more generally.
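The missing-/dev/shm hypothesis can be confirmed from inside the Lambda handler before attempting the model load. A minimal stdlib-only sketch (the helper name and log line are illustrative, not from any library):

```python
import os

def shm_available(path="/dev/shm"):
    """Return True if POSIX shared memory looks usable.

    AWS Lambda does not mount /dev/shm, so multiprocessing primitives
    that rely on shared memory can fail there with errors like errno 38.
    """
    return os.path.isdir(path) and os.access(path, os.W_OK)

if __name__ == "__main__":
    # Log this at the top of the handler to confirm the hypothesis.
    print(f"/dev/shm usable: {shm_available()}")
```

Logging this once per cold start is cheap and distinguishes "environment lacks shared memory" from "model artifact is broken".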
Lambda error output
spaCy shouldn't use multiprocessing directly unless you instruct it to, but some of the libraries we use might have multiprocessing under the hood. In particular, for spacy-transformers, HuggingFace Transformers is doing a lot of things and may be the source of the issue.

To help you narrow down the cause, you could try using a non-transformer spaCy model and see if it has the same error.
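If the transformer stack does turn out to be the culprit, a common mitigation on Lambda is to disable thread- and fork-based parallelism before torch/transformers are imported. A handler sketch along those lines; the model path `/opt/model` and the handler shape are assumptions, not details from this thread:

```python
import os

# Must be set BEFORE importing torch/transformers/spacy to take effect.
os.environ["OMP_NUM_THREADS"] = "1"             # single OpenMP thread
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # HF tokenizers: no forked workers

_nlp = None  # cache the loaded pipeline across warm invocations

def handler(event, context):
    global _nlp
    import spacy  # deferred import so the env vars above are seen first
    if _nlp is None:
        _nlp = spacy.load("/opt/model")  # hypothetical deployment path
    doc = _nlp(event.get("text", ""))
    return {"cats": doc.cats}
```

Caching the pipeline in a module-level variable also avoids paying the transformer load cost on every warm invocation.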