Commit 154d4c6

updates
1 parent 8c8038d commit 154d4c6

File tree

1 file changed: +1 −1 lines changed

articles/container-apps/troubleshoot-container-start-failures.md

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ With the log clues in hand, examine the possible reasons your container fails to
| Application crashes and the exit code isn't 0 | If the container exited with a nonzero exit code, an unhandled exception or error in your application code is often the problem. Check the application logs for any exceptions or error messages right before the crash. Common causes include missing files or dependencies, runtime exceptions, or misconfigured frameworks. | Reproduce the issue locally if possible. Run the same image locally using Docker (`docker run --rm <IMAGE>`) to see if the application exits or errors out. Fix any application-level errors (for example, include all necessary dependencies in the image and handle all exceptions), then redeploy the container. |
| Startup command and entrypoint | If you overwrote the startup command, or if your Dockerfile’s entrypoint isn’t launching the app correctly, the container could start and then immediately exit. By default, Azure Container Apps runs the container with the entrypoint/CMD defined in the image. Verify that the image’s start command actually starts the intended service. For example, for a Node.js app, ensure the Docker CMD runs your server (and doesn’t just exit). | Double-check the Dockerfile’s entrypoint/CMD. If you use a custom command in Azure (via YAML or CLI), ensure it’s correct. You can update the container app’s startup command via the CLI or the Azure portal. In the portal, go to your container, select **Edit container**, and then edit the command and args. |
| Health probe failures | Azure Container Apps automatically adds a liveness and readiness probe if you enabled HTTP ingress and didn’t specify custom probes. A common scenario is that the app is actually running, but the health check is failing, causing the platform to restart the container or mark the revision as unhealthy. For example, if your app listens on a nondefault port or takes longer to start, the default probe might fail. | Ensure the container’s target port matches the port your application listens on. If your app needs more time to initialize, configure a startup or liveness probe with a longer initial delay. In the Azure portal, look under *Containers* > *Health probes*, or configure the probes via CLI YAML.<br><br>If the app doesn’t serve HTTP (for example, a background worker), consider disabling ingress or using internal ingress so that no HTTP probe is enforced. Alternatively, temporarily turning off external ingress can stop a container that continuously restarts. |
-| Scaler signal connectivity | If you’re using a scaling rule to scale your application, ensure that your application can connect to the source of the scaling signal. Any database, event hub, or other container app used as a scale source needs to be reachable over the network and accessible for queries. | TODO |
+| Scaler signal connectivity | If you’re using a scaling rule to scale your application, ensure that your application can connect to the source of the scaling signal. Any database, event hub, or other container app used as a scale source needs to be reachable over the network and accessible for queries. | Run your container locally first to verify that the services are reachable in a development context. |
| Resource constraints (memory/CPU) | The Container Apps runtime can kill a container that runs out of memory or CPU resources. Check the system logs for out-of-memory (OOM) errors or throttling. | Compare your app’s resource needs with the limits configured for the container app. |

## Verify environment variables and configuration
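The health-probe fix described in the table can be sketched in Container Apps YAML. This is a minimal sketch, not the full app definition: the container name `my-app`, the image, the `/healthz` path, and port `8080` are all hypothetical placeholders; substitute your own values and tune the delay to your app's startup time.

```yaml
properties:
  template:
    containers:
      - name: my-app                                  # hypothetical container name
        image: myregistry.azurecr.io/my-app:latest    # hypothetical image reference
        probes:
          - type: Liveness
            httpGet:
              path: /healthz                          # assumed health endpoint
              port: 8080                              # must match the port your app listens on
            initialDelaySeconds: 30                   # give a slow-starting app more time
            periodSeconds: 10
```

A file like this can be applied with `az containerapp update --yaml <FILE>`, which replaces the default probes with the ones you specify.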
