// Module included in the following assemblies
//
// * /serverless/serverless-release-notes.adoc

[id="serverless-rn-1-20-0_{context}"]
= Release Notes for Red Hat {ServerlessProductName} 1.20.0

{ServerlessProductName} 1.20.0 is now available. New features, changes, and known issues that pertain to {ServerlessProductName} on {product-title} are included in this topic.

[id="new-features-1-20-0_{context}"]
== New features

* {ServerlessProductName} now uses Knative Serving 0.26.
* {ServerlessProductName} now uses Knative Eventing 0.26.
* {ServerlessProductName} now uses Kourier 0.26.
* {ServerlessProductName} now uses Knative `kn` CLI 0.26.
* {ServerlessProductName} now uses Knative Kafka 0.26.
* The `kn func` CLI plug-in now uses `func` 0.20.

* The Kafka broker is now available as a Technology Preview.
* The `kn event` plug-in is now available as a Technology Preview.

[id="known-issues-1-20-0_{context}"]
== Known issues

* {ServerlessProductName} deploys Knative services with a default address that uses HTTPS. When sending an event to a resource inside the cluster, the sender does not have the cluster certificate authority (CA) configured. This causes event delivery to fail, unless the cluster uses globally accepted certificates.
+
For example, an event delivery to a publicly accessible address works:
+
[source,terminal]
----
$ kn event send --to-url https://ce-api.foo.example.com/
----
+
On the other hand, this delivery fails if the service uses a public address with an HTTPS certificate issued by a custom CA:
+
[source,terminal]
----
$ kn event send --to Service:serving.knative.dev/v1:event-display
----
+
Sending an event to other addressable objects, such as brokers or channels, is not affected by this issue and works as expected.
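+
For example, the same `Kind:apiVersion:name` reference form can target a broker. This is a sketch: the broker name `default` is illustrative and not taken from this document:
+
[source,terminal]
----
$ kn event send --to Broker:eventing.knative.dev/v1:default
----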

* The Kafka broker currently does not work on a cluster with Federal Information Processing Standards (FIPS) mode enabled.

* If you create a Spring Boot function project directory with the `kn func create` command, subsequently running the `kn func build` command fails with the following error message:
+
[source,terminal]
----
[analyzer] no stack metadata found at path ''
[analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle
----
+
As a workaround, you can change the `builder` property to `gcr.io/paketo-buildpacks/builder:base` in the `func.yaml` function configuration file.
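+
For reference, a `func.yaml` with this workaround applied might look like the following sketch; all fields except `builder` are illustrative:
+
[source,yaml]
----
name: my-function                              # illustrative function name
runtime: springboot
builder: gcr.io/paketo-buildpacks/builder:base # workaround builder image
----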

* Deploying a function using the `gcr.io` registry fails with this error message:
+
[source,terminal]
----
Error: failed to get credentials: failed to verify credentials: status code: 404
----
+
As a workaround, use a registry other than `gcr.io`, such as `quay.io` or `docker.io`.
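+
For example, the image reference in `func.yaml` might point at `quay.io` instead; the repository path here is a placeholder:
+
[source,yaml]
----
image: quay.io/myuser/my-function:latest # myuser/my-function is a placeholder
----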

* TypeScript functions created with the `http` template fail to deploy on the cluster.
+
As a workaround, in the `func.yaml` file, replace the following section:
+
[source,yaml]
----
buildEnvs: []
----
+
with this:
+
[source,yaml]
----
buildEnvs:
- name: BP_NODE_RUN_SCRIPTS
  value: build
----

* In `func` version 0.20, some runtimes might be unable to build a function by using podman. You might see an error message similar to the following:
+
[source,terminal]
----
ERROR: failed to image: error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info": EOF
----
+
** The following workaround exists for this issue:

.. Update the podman service by adding `--time=0` to the service `ExecStart` definition:
+
.Example service configuration
[source,terminal]
----
ExecStart=/usr/bin/podman $LOGGING system service --time=0
----
.. Restart the podman service by running the following commands:
+
[source,terminal]
----
$ systemctl --user daemon-reload
----
+
[source,terminal]
----
$ systemctl restart --user podman.socket
----

** Alternatively, you can expose the podman API by using TCP:
+
[source,terminal]
----
$ podman system service --time=0 tcp:127.0.0.1:5534 &
export DOCKER_HOST=tcp://127.0.0.1:5534
----
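
The `ExecStart` update in the first workaround can also be made as a systemd user drop-in instead of editing the unit file in place. This is a sketch; the drop-in path follows the standard systemd user override convention and is not taken from this document:

[source,ini]
----
# ~/.config/systemd/user/podman.service.d/override.conf (assumed path)
[Service]
# An empty ExecStart= line clears the inherited value before replacing it
ExecStart=
ExecStart=/usr/bin/podman $LOGGING system service --time=0
----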