---
outline: 2
uacp: This page is linked from the Help Portal at https://help.sap.com/products/BTP/65de2977205c403bbc107264b8eccf4b/d2ee648522044ea19d3b5126c29692b5.html
---
Support Channels & Troubleshooting FAQs {.subtitle}
| To... | External |
|---|---|
| Ask Questions / Get Answers | SAP Community |
| Create issues / bug reports | SAP Support Portal |
| File feature requests | SAP Influence Portal |
::: tip
If you encounter issues, check the Troubleshooting FAQs below before posting questions or creating issues in the support channels.
:::
[[toc]]
To start VS Code via the code CLI, users on macOS must first run a command (Shell Command: Install 'code' command in PATH) to add the VS Code executable to the PATH environment variable. Read VS Code's macOS setup guide for help.
Run the latest LTS version of Node.js (even major versions: 20, 22, 24). Avoid odd versions, as some modules with native parts may not install. Check your version with:

```sh
node -v
```

If you encounter an error like "Node.js v1... or higher is required for @sap/cds ..." on server startup, upgrade at least to the indicated version, or better, to the most recent LTS version.
For Cloud Foundry, use the engines field in package.json.
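For example, a sketch of such an `engines` entry in package.json (the version range is an example; adjust it to your target runtime):

```json
{
  "engines": {
    "node": ">=22"
  }
}
```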
Learn more about the release schedule of Node.js.{.learn-more}
Learn about ways to install Node.js.{.learn-more}
If you get error messages like Error: EACCES: permission denied, mkdir '/usr/local/...' when installing a global module like @sap/cds-dk, configure npm to use a different directory for global modules:
```sh
mkdir ~/.npm-global ; npm set prefix '~/.npm-global'
export PATH=~/.npm-global/bin:$PATH
```

Also add the last line to your user profile, for example, ~/.profile, so that future shell sessions have the changed PATH as well.
Learn more about other ways to handle this error.{.learn-more}
Global npm installations are stored in a user-specific directory on your machine. On Windows, this directory usually is:

```sh
C:\Users\<your-username>\AppData\Roaming\npm
```

Verify that your PATH environment variable contains this path.

In addition, set the variable NODE_PATH to:

```sh
C:\Users\<your-username>\AppData\Roaming\npm\node_modules
```
- Design time tools like `cds init`: Install and update `@sap/cds-dk` globally using `npm i -g @sap/cds-dk`.

- Node.js runtime: Maintain the version of `@sap/cds` in the top-level package.json of your application in the `dependencies` section.

  Learn more about recommendations on how to manage Node.js dependencies.{.learn-more}

- CAP Java SDK: Maintain the version in the pom.xml of your Java module, which is located in the root folder. In this file, modify the property `cds.services.version`.
By default, CAP Node.js servers listen on port 4004, which might be occupied if other CAP servers are running in parallel. In this case, cds watch offers to pick a different port.
```sh
cds watch
...
EADDRINUSE - port 4004 is already in use by another server process.
Press Return to restart with an arbitrary port.
...
```

Ports can be explicitly set with the PORT environment variable, the `cds.server.port = 4005` config option, or the `--port` argument to `cds serve` and `cds watch`; see `cds help watch` for more.
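For example, the config option can be set in the `cds` section of your package.json (a sketch; 4005 is an arbitrary example value):

```json
{
  "cds": {
    "server": { "port": 4005 }
  }
}
```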
Node.js allows extending existing services, for example in mashup scenarios. This is commonly done at bootstrap time in cds.on('served', ...) handlers like so:

```js
cds.on('served', ()=>{
  const { db } = cds.services
  db.on('before',(req)=> console.log(req.event, req.path))
})
```

Note that event emission in Node.js is synchronous, so avoid any await operations in such handlers, as they can lead to race conditions. In particular, when registering additional event handlers with a service, as shown in the snippet above, this could lead to issues with handler registrations that are very hard to detect and resolve. So, for example, don't do this:

```js
cds.on('served', async ()=>{
  const db = await cds.connect.to('db') // DANGER: will cause race condition !!!
  db.on('before',(req)=> console.log(req.event, req.path))
})
```

Requirements:

- App start script is `cds-serve` (not `npx cds run`)
- Dependency `@dynatrace/oneagent-sdk` is in package.json
... with error messages like these:
- Acquiring client from pool timed out
- ResourceRequest timed out
Verify that the SAP HANA database is accessible in your application's environment. This includes verifying that the SAP HANA instance is either part of or mapped to your Cloud Foundry space or Kyma cluster, and that the IP addresses are in an allowed range. Connectivity issues are the likely root cause if you experience this error during application startup.
Learn how to set up SAP HANA instance mappings{.learn-more style="margin-top:10px"}
If you frequently get this error during normal runtime operation your database client pool settings likely don't match the application's requirements. There are two possible root causes:
| Explanation | |
|---|---|
| Root Cause 1 | The maximum number of database clients in the pool is reached and additional requests wait too long for the next client. |
| Root Cause 2 | The creation of a new connection to the database takes too long. |
| Solution | Adapt max or acquireTimeoutMillis with more appropriate values, according to the documentation. |
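In a CAP Node.js project, these pool settings typically live under cds.requires.db.pool in package.json. A sketch with example values (tune them to your measured load, don't copy them blindly):

```json
{
  "cds": {
    "requires": {
      "db": {
        "pool": {
          "max": 100,
          "acquireTimeoutMillis": 10000
        }
      }
    }
  }
}
```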
Ensure that database transactions are either committed or rolled back. This can work in two ways:
- Couple it to your request (this happens automatically): Once the request has succeeded, the database service commits the transaction. If there was an error in one of the handlers, the database service performs a rollback.
- For manual transactions (for example, by writing `const tx = cds.tx()`), you need to perform the commit/rollback yourself: `await tx.commit()` / `await tx.rollback()`.
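A sketch of the commit/rollback pattern for manual transactions. `withTransaction` and the `db` parameter are illustrative names, not CAP APIs; `db` stands for a connected service (for example, obtained via `cds.connect.to('db')`), whose `tx()` mirrors the `cds.tx()` call from the bullet above:

```javascript
// Illustrative helper: commit on success, roll back on error.
async function withTransaction (db, work) {
  const tx = db.tx() // start a manual transaction (cf. cds.tx())
  try {
    const result = await work(tx) // run your statements on the tx
    await tx.commit()             // success path: commit
    return result
  } catch (e) {
    await tx.rollback()           // error path: roll back, then rethrow
    throw e
  }
}
```

Whatever shape you choose, the essential point from the bullets above is that every manual transaction must end in exactly one of `await tx.commit()` or `await tx.rollback()`.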
If you're using @sap/hana-client, verify that the environment variable HDB_NODEJS_THREADPOOL_SIZE is adjusted appropriately. This variable specifies the number of workers that concurrently execute asynchronous method calls for different connections.
| Explanation | |
|---|---|
| Root Cause | 431 occurs when the size of the request headers exceeds the maximum limit configured in the Node.js HTTP server. In this case, the Node.js HTTP server rejects the request during the initial parsing phase before it reaches CAP. Therefore, the request is not logged by the application. |
| Solution | Inspect the request headers and check their size. If large headers are required and cannot be reduced, increase the maximum allowed HTTP header size in Node.js by setting the environment variable NODE_OPTIONS="--max-http-header-size=65536". |
... and do not even seem to reach the application?
If you have long running requests, you may experience intermittent 502 errors that are characterized by being logged by the platform's router, but not by your CAP application.
In most cases, this behavior is caused by the server having just closed the TCP connection without waiting for acknowledgement, so that the platform's load balancer still considers it open and uses it to forward the request.
The issue is discussed in detail in this blog post by Adam Crowder.
One solution is to increase the server's keepAliveTimeout to above that of the respective load balancer.
The following example shows how to set keepAliveTimeout on the http.Server created by CAP.
```js
const cds = require('@sap/cds')
cds.once('listening', ({ server }) => {
  server.keepAliveTimeout = 3 * 60 * 1000 // > 3 mins
})
module.exports = cds.server
```

Watch the video to learn more about Best Practices for CAP Node.js Apps.{.learn-more}
... mostly after 30 seconds, even though the application continues processing the request?
| Explanation | |
|---|---|
| Root Cause | Most probably, this error is caused by the destination timeout of the App Router. |
| Solution | Set your own timeout configuration of @sap/approuter. |
| Explanation | |
|---|---|
| Root Cause | Most probably, the service name in the requires section does not match the served service definition. |
| Solution | Set the .service property in the respective requires entry. See cds.connect() for more details. |
| Explanation | |
|---|---|
| Root Cause | The destination, the remote system or the request details are not configured correctly. |
| Solution | To further troubleshoot the root cause, you can enable logging with environment variables SAP_CLOUD_SDK_LOG_LEVEL=silly and DEBUG=remote. |
| Explanation | |
|---|---|
| Root Cause | If the application has a service binding with the same name as the requested destination, the SAP Cloud SDK prioritizes the service binding. This service has different endpoints than the originally targeted remote service. For more information, refer to the SAP Cloud SDK documentation. |
| Solution | Use different names for the service binding and the destination. |
| Explanation | |
|---|---|
| Root Cause 1 | The package @cap-js/cds-types is not installed. |
| Solution 1 | Install the package as a dev dependency. |
| Root Cause 2 | Symlink is missing. |
| Solution 2 | Try npm rebuild or add @cap-js/cds-types in your tsconfig.json. |
Install type definitions by adding the typescript facet:
::: code-group
```sh [cds add]
cds add typescript
```

```sh [npm]
npm i -D @cap-js/cds-types
```

:::
Installing @cap-js/cds-types leverages VS Code's automatic type resolution mechanism by symlinking the package in node_modules/@types/sap__cds in a postinstall script. If you find that this symlink is missing, try npm rebuild to trigger the postinstall script again.
If the symlink doesn't persist, explicitly configure tsconfig.json:
::: code-group
```json
{
  "compilerOptions": {
    "types": ["@cap-js/cds-types"]
  }
}
```

:::
For incomplete types, report issues in the @cap-js/cds-types repository.
If you get this error (for example, when building MTX resources), install the tar library for better Windows compatibility:
```sh
npm add -D tar
```

On macOS and Linux, the built-in implementation continues to be used.
Use privilegedUser() when defining your own RequestContext. This introduces a user that passes all authorization restrictions. It's useful when calling a restricted service through the local service consumption API regardless of the original user's authorizations, or in a background thread.
| Explanation | |
|---|---|
| Root Cause | You've explicitly configured a mock user with a name that is already used by a preconfigured mock user. |
| Solution | Rename the mock user and build your project again. |
There could be a mismatch between your locally installed Node.js version and the version that is used by the cds-maven-plugin. The result is an error similar to the following:
```sh
❗️ ERROR on server start: ❗️
Error: The module '/home/user/....node'
was compiled against a different Node.js version using
```

To fix this, either switch the Node.js version using a Node version manager, or add the Node version to your pom.xml as follows:
```xml
<properties>
  <!-- ... -->
  <cds.install-node.nodeVersion>v20.11.0</cds.install-node.nodeVersion>
  <!-- ... -->
</properties>
```
Learn more about the install-node goal.{.learn-more target="_blank"}
To expose additional REST APIs not covered by CAP's protocol adapters (for example, OData V4), implement your own Spring Web MVC RestController. Common examples include CSV file uploads or custom REST endpoints.
Your RestController can fully leverage CAP Java APIs. You'll typically interact with services and the database through the local service consumption API. Learn more: Spring docs, Spring Boot docs, and this tutorial.
The project skeleton generated by the CAP Java archetype adds the relevant Spring Boot and CAP Java dependencies, so that an SQL database is supported by default. However, using an SQL database in CAP Java is fully optional. You can also develop CAP applications that don't use persistence at all. To remove the SQL database support, exclude the JDBC-related dependencies of Spring Boot and CAP Java. CAP Java then won't create a Persistence Service instance.
::: tip Default Application Service event handlers delegate to Persistence Service
You need to implement your own custom handlers in case you remove the SQL database support.
:::
You can exclude those dependencies from the cds-starter-spring-boot dependency in the srv/pom.xml:
```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-starter-spring-boot</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.sap.cds</groupId>
      <artifactId>cds-feature-jdbc</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-jdbc</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

In addition, you might want to remove the H2 dependency, which is included in the srv/pom.xml by default as well.
If you don't want to exclude dependencies completely, but want to make sure that an in-memory H2 database isn't used, you can disable Spring Boot's DataSource auto-configuration by annotating the Application.java class with @SpringBootApplication(exclude = org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration.class). In that mode, however, CAP Java can still react to explicit data source configurations or database bindings.
- In Problems view, execute Quick fix from the context menu if available. If Eclipse asks you to install additional Maven Eclipse plug-ins to overcome the error, do so.
- Errors like Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:exec-maven-plugin can be ignored. Do so in Problems view > Quick fix context menu > Mark goal as ignored in Eclipse preferences.
- In case there are still errors in the project, use Maven > Update Project... from the project's context menu.
If your application's endpoints are served with OData and you want to change the standard HTML response to an OData response, adapt the following snippet to your needs and add it to your custom server.js file.
```js
let app
cds.on('bootstrap', a => {
  app = a
})
cds.on('served', () => {
  app.use((req, res, next) => {
    // > unhandled request
    res.status(404).json({ message: 'Not Found' })
  })
})
```

The annotation @odata.draft.enabled is very specific to SAP Fiori elements, so only some requests are allowed.
For example it's forbidden to freely add IsActiveEntity to $filter, $orderby and other query options.
The technical reason for that is that active instances and drafts are stored in two different database tables.
Mixing them together is not trivial, therefore only some special cases are supported.
- From the SQLite page, download the precompiled binaries for Windows (`sqlite-tools-win*.zip`).
- Create a folder C:\sqlite and unzip the downloaded file in this folder to get the file `sqlite3.exe`.
- Start using SQLite directly by opening `sqlite3.exe` from the folder C:\sqlite or from a command line window opened in C:\sqlite.
- Optional: Add C:\sqlite to your PATH environment variable. As soon as the configuration is active, you can start using SQLite from every location on your Windows installation.
- Use the command `sqlite3` to connect to the in-memory database:

  ```sh
  C:\sqlite>sqlite3
  SQLite version ...
  Enter ".help" for instructions
  Connected to a transient in-memory database.
  Use ".open FILENAME" to reopen on a persistent database.
  sqlite>
  ```

If you want to test further, use the `.help` command to see all available commands in sqlite3.
In case you want a visual interface tool to work with SQLite, you can use SQLite Viewer. It's available as an extension for VS Code and integrated in SAP Business Application Studio.
To configure this service in the SAP BTP cockpit on trial, refer to the SAP HANA Cloud Onboarding Guide. See SAP HANA Cloud documentation or visit the SAP HANA Cloud community for more details.
::: warning HANA needs to be restarted on trial accounts
On trial, your SAP HANA Cloud instance will be automatically stopped overnight, according to the server region time zone. That means you need to restart your instance every day before you start working with your trial.
:::
Learn more about trying out SAP HANA Cloud with tutorials in the Tutorial Navigator.{.learn-more}
| Explanation | |
|---|---|
| Root Cause | This is a known issue with older HDI/HANA versions, which are offered on trial landscapes. |
| Solution | Apply the workaround of adding --treat-unmodified-as-modified as argument to the hdi-deploy command in db/package.json. This option redeploys files, even if they haven't changed. If you're the owner of the SAP HANA installation, ask for an upgrade of the SAP HANA instance. |
| Explanation | |
|---|---|
| Root Cause | An error like Version incompatibility for the ... build plugin: "2.0.x" (installed) is incompatible with "2.0.y" (requested) indicates that your project demands a higher version of SAP HANA than what is available in your org/space on SAP BTP, Cloud Foundry environment. The error might not occur on other landscapes for the same project. |
| Solution | Lower the version in file db/src/.hdiconfig to the one given in the error message. If you're the owner of the SAP HANA installation, ask for an upgrade of the SAP HANA instance. |
- Could not connect to any host... - unable to get local issuer certificate
- MTX sidecar crashes with HTTP error 429 (Too Many Requests)
| Explanation | |
|---|---|
| Root Cause | A change of SAP's root certificate from DigiCert Global Root CA to DigiCert TLS RSA4096 Root G5 leads to deployment failures because older certificates get rejected by too old SAP HANA driver versions and/or older service bindings in SAP HANA Cloud. |
| Solution | For Node.js applications, update the hdb driver to the latest version. See SAP note 3397584 for details. See the SAP HANA blog post for the broader context. |
| Explanation | |
|---|---|
| Root Cause | If you deploy to SAP HANA from a local Windows machine, this error might occur if the SAP CommonCryptoLib isn't installed on this machine. |
| Solution | To install it, follow these instructions. If this doesn't solve the problem, also set the environment variables as described here. |
- Failed to get connection for database
- Connection failed (RTE:[300015] SSL certificate validation failed
- Cannot create SSL engine: Received invalid SSL Record Header
| Explanation | |
|---|---|
| Root Cause | Your SAP HANA Cloud instance is stopped. |
| Solution | Start your SAP HANA Cloud instance. |
| Explanation | |
|---|---|
| Root Cause | The @sap/hana-client can't verify the certificate because of missing system toolchain dependencies. |
| Solution | Make sure ca-certificates is installed on your Docker container. |
| Explanation | |
|---|---|
| Root Cause | Your SAP HANA Cloud instance is stopped. |
| Solution | Start your SAP HANA Cloud instance. |
| Explanation | |
|---|---|
| Root Cause | Your configuration isn't properly set. |
| Solution | Configure your project as described in Using Databases. |
| Explanation | |
|---|---|
| Root Cause | Your IP isn't part of the filtering you configured when you created an SAP HANA Cloud instance. This error can also happen if you exceed the maximum number of simultaneous connections to SAP HANA Cloud (1000). |
| Solution | Configure your SAP HANA Cloud instance to accept your IP. If configured correctly, check if the number of database connections are exceeded. Make sure your pool configuration does not allow more than 1000 connections. |
| Explanation | |
|---|---|
| Root Cause | Your project configuration is missing some configuration in your .hdiconfig file. |
| Solution | Use cds add hana to add the needed configuration to your project. Or maintain the hdbmigrationtable plugin in your .hdiconfig file manually: "hdbmigrationtable": { "plugin_name": "com.sap.hana.di.table.migration" } |
Deployment fails — In USING declarations only main artifacts can be accessed, not sub artifacts of <name>
This error occurs if all of the following apply:

- You added native SAP HANA objects to your CAP model.
- You used deploy format `hdbcds`.
- You didn't use the default naming mode `plain`.
| Explanation | |
|---|---|
| Root Cause | The name/prefix of the native SAP HANA object collides with a name/prefix in the CAP CDS model. |
| Solution | Change the name of the native SAP HANA object so that it doesn't start with the name given in the error message and doesn't start with any other prefix that occurs in the CAP CDS model. If you can't change the name of the SAP HANA object, because it already exists, define a synonym for the object. The name of the synonym must follow the naming rule to avoid collisions (root cause). |
| Explanation | |
|---|---|
| Root Cause | SAP HANA still claims exclusive ownership of the data that was once deployed through hdbtabledata artifacts, even though the CSV files are now deleted in your project. |
| Solution | Add an undeploy.json file to the root of your database module (the db folder by default). This file defines the files and data to be deleted. See section HDI Delta Deployment and Undeploy Allow List for more details. |
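A minimal undeploy.json sketch. The listed path is an example for a generated .hdbtabledata artifact; adjust it to the artifacts of your project:

```json
[
  "src/gen/data/my.bookshop-Books.hdbtabledata"
]
```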
If you want to keep the data from .csv files and data you've already added, apply SAP Note 2922271.
Depending on whether you have a single-tenant or multi-tenant application, see the following details for how to set the path_parameter and undeploy parameters:
:::details Single-tenant applications {open}
Use the db/undeploy.json file as given in the SAP note. The package.json file that is mentioned in the SAP note is located in the db/ folder.
- If you don't find a db/package.json file, use gen/db/package.json (created by `cds build`) as a template and copy it to db/package.json.
- After the modification, run `cds build --production` and verify your changes have been copied to gen/db/package.json.
- Don't modify gen/db/package.json as it is overwritten on every build.
:::
:::details Multi-tenant applications
Instead of configuring the static deployer application in db/package.json, use environment variable HDI_DEPLOY_OPTIONS, the cds configuration in package.json, or add the options to the model update request as hdi parameter:
CDS configuration for Deployment Service:

```json
"cds.xt.DeploymentService": {
  "hdi": {
    "deploy": {
      "undeploy": [
        "src/gen/data/my.bookshop-Books.hdbtabledata"
      ],
      "path_parameter": {
        "src/gen/data/my.bookshop-Books.hdbtabledata:skip_data_deletion": "true"
      }
    },
    ...
  }
}
```

Options in SaaS Provisioning Service upgrade API call payload:
```json
{
  "tenants": ["*"],
  "_": {
    "hdi": {
      "deploy": {
        "undeploy": [
          "src/gen/data/my.bookshop-Books.hdbtabledata"
        ],
        "path_parameter": {
          "src/gen/data/my.bookshop-Books.hdbtabledata:skip_data_deletion": "true"
        }
      }
    }
  }
}
```

:::
After you have successfully deployed these changes to all affected HDI (tenant) containers (in all spaces, accounts etc.), you can remove the configuration again.
The cds runtime sets the session variable APPLICATIONUSER, which should always reflect the logged-in user. Do not use an XS_ prefix.
In this case, the process was killed by a SIGKILL signal, typically because it exceeded its resource limits, for example memory or CPU, causing the container platform to terminate it.
::: tip Distinguish extensibility and non-extensibility scenarios
While out-of-memory issues are more common, with extensibility enabled you’re more likely to run into CPU bottlenecks due to expensive compilations that need to be performed at (MTX) runtime.
:::
MTX uses four parallel workers by default to perform tenant upgrades. If your project exceeds a certain complexity threshold, you might run into these resource bottlenecks. We advise you to follow this algorithm to mitigate resource overload:
1. Decrease your model complexity: Ask yourself, is your current domain model a good compression of your business domain? Decreasing complexity here will have positive trickle-down effects, including tenant upgrade performance.

2. Increase resources (scale up): Increase the RAM assigned to your MTX sidecar or upgrade task. This is typically done in deployment resources like mta.yaml (Cloud Foundry) or values.yaml (Kyma).

   Learn more about database upgrade task configuration{.learn-more}

   ::: info In Cloud Foundry, CPU shares scale with memory
   As there is no way to increase CPU independently from memory, your memory configuration might be a bottleneck even if the process is killed due to CPU spikes.
   :::

3. Decrease workers in async MTX operations: When scaling up resources is no longer feasible, you can run with fewer parallel migrations. This won't affect application runtime performance.

4. Increase the number of MTX sidecars (scale out): To compensate for eventual performance losses from 3., distribute the work across multiple sidecars.
The deployment logs are part of the application logs. To avoid problems with the logging infrastructure, the default detail level of the deployment logs is limited to logs printed to stderr. To get more details, you need to increase the log level by setting the environment variable DEBUG=deploy.
This message indicates that extensions exist, but the application is not configured for extensibility. To avoid accidental data loss from removing existing extensions from the database, the upgrade is blocked.
::: danger If data loss is acceptable
Setting cds.requires['cds.xt.DeploymentService'].upgrade.skipExtensionCheck = true in your CDS configuration enables you to skip this check.
:::
See How to configure your App Router to verify your setup.
Find the documentation on cds login{.learn-more}
For a start, create your Trial Account.
If mbt build fails with The 'npm ci' command can only install with an existing package-lock.json, this means that such a file is missing in your project.
- Create the package-lock.json file with a regular `npm update` command.
- If the file was not created, make sure to enable it with `npm config set package-lock true` and repeat the previous command.
The package-lock.json should be added to version control. Make sure that .gitignore does not contain it.
The purpose of package-lock.json is to pin your project's dependencies to allow for reproducible builds.
Learn more about dependency management in Node.js.{.learn-more}
- Make sure to use the latest version of the Cloud MTA Build Tool (MBT).
- Consult the Cloud MTA Build Tool documentation for further information, for example, on the available tool options.
By default, the Cloud MTA Build Tool executes module builds in parallel. If you want to enforce a specific build order, for example, because one module build relies on the outcome of another one, check the Configuring build order section in the tool documentation.
cf undeploy <mta-id> deletes an MTA (use cf mtas to find the MTA ID).
Use --delete-services, --delete-service-keys and --delete-service-brokers parameters to also wipe services, service keys, or service brokers.
::: danger
This also deletes the HDI containers with the application data.
:::
You can reduce MTA archive sizes, and thereby speed up deployments, by omitting node_modules folders.
First, add a file less.mtaext with the following content:
::: code-group
```yaml
_schema-version: '3.1'
ID: bookshop-small
extends: capire.bookshop
modules:
  - name: bookshop-srv
    build-parameters:
      ignore: ["node_modules/"]
```

:::
Now you can build the archive with:
```sh
mbt build -t gen --mtar mta.tar -e less.mtaext
```

::: warning Not recommended for production deployments
- Use such reduced archives only for test deployments during development. For production deployments, self-contained archives are preferable.
- This works only if all your dependencies are available in public registries like npmjs.org or Maven Central. Dependencies from corporate registries are not resolvable in this mode.
:::
You can use the Cloud Foundry CLI to retrieve recent logs:
```sh
cf logs <appname> --recent
```

::: tip Stream logs to your terminal
If you omit the option --recent, you can run this command in parallel to your deployment and see the logs as they come in.
:::
This is a known issue on Windows. The fix is to set the HOMEDRIVE environment variable to C:. In any cmd shell session, you can do so with SET HOMEDRIVE=C:
Also, make sure to persist the variable for future sessions in the system preferences. See How do I set my system variables in Windows for more details.
This is the same issue as with the installation error above.
If, on deployment to Cloud Foundry, a module crashes with the error message Cannot mkdir: No space left on device, adjust the disk space available to that module via the disk-quota parameter in the mta.yaml file:
```yaml
parameters:
  disk-quota: 512M
  memory: 256M
```

Learn more about this error in KBA 3310683{.learn-more}
In order to send a request to an app, it must be associated with a route. Please see Cloud Foundry Documentation -> Routes for details. As this is done automatically by default, the process is mostly transparent for developers.
If you receive an error response 404 Not Found: Requested route ('<route>') does not exist, this can have two reasons:
-
The route really does not exist or is not bound to an app. You can check this in SAP BTP cockpit either in the app details view or in the list of routes in the Cloud Foundry space.
-
The app (or all app instances, in case of horizontal scale-out) failed the readiness check. Please see Health Checks and Using Cloud Foundry health checks for details on how to set up the check.
::: details Troubleshoot using the Cloud Foundry CLI
```sh
cf apps                                        # -> list all apps
cf app <your app name>                         # -> get details on your app, incl. state and routes
cf app <your app name> --guid                  # -> get your app's guid
cf curl "/v3/processes/<your app guid>/stats"  # -> list of processes (one per app instance) with property "routable"
                                               #    indicating whether the most recent readiness check was successful
```
See cf curl and The process stats object for details on how to use the CLI.
:::
For security reasons, the index page is not served in production in Node.js and Java.
If you try to access your backend URL, you will therefore see a 404 Cannot GET / error.
::: warning
This also means you cannot use the / path as a health status indicator.
See the Health Checks guide for the correct paths.
:::
Enable this page in your deployment only if it's absolutely required and you understand the security implications for your application.
Learn more about enabling generic index page in Java and in Node.js.{.learn-more}
Run npm i --package-lock-only to update the package-lock.json and re-run cds up.