diff --git a/apps/docs/components/Navigation/NavigationMenu/NavigationMenu.constants.ts b/apps/docs/components/Navigation/NavigationMenu/NavigationMenu.constants.ts
index df0dfca94eef2..e507e711b1467 100644
--- a/apps/docs/components/Navigation/NavigationMenu/NavigationMenu.constants.ts
+++ b/apps/docs/components/Navigation/NavigationMenu/NavigationMenu.constants.ts
@@ -1586,6 +1586,7 @@ export const realtime: NavMenuConstant = {
name: 'Postgres Changes',
url: '/guides/realtime/postgres-changes',
},
+ { name: 'Settings', url: '/guides/realtime/settings' },
],
},
{
@@ -1622,7 +1623,7 @@ export const realtime: NavMenuConstant = {
{ name: 'Quotas', url: '/guides/realtime/quotas' },
{ name: 'Pricing', url: '/guides/realtime/pricing' },
{ name: 'Architecture', url: '/guides/realtime/architecture' },
- { name: 'Message Protocol', url: '/guides/realtime/protocol', items: [] },
+ { name: 'Protocol', url: '/guides/realtime/protocol', items: [] },
{ name: 'Benchmarks', url: '/guides/realtime/benchmarks' },
],
},
diff --git a/apps/docs/content/guides/api/api-keys.mdx b/apps/docs/content/guides/api/api-keys.mdx
index 5b26b8ca72ea1..76b76f0953c48 100644
--- a/apps/docs/content/guides/api/api-keys.mdx
+++ b/apps/docs/content/guides/api/api-keys.mdx
@@ -156,6 +156,10 @@ If you know or suspect that the JWT secret itself is leaked, refer to the sectio
If the JWT secret is secure, prefer substituting the `service_role` JWT-based key with a new secret key which you can create in the [API Keys](/dashboard/project/_/settings/api-keys/new) dashboard. This will prevent downtime for your application.
+### Can I still use my old `anon` and `service_role` API keys after enabling the publishable and secret keys?
+
+Yes. This allows you to transition between the API keys with zero downtime by gradually swapping your clients while both sets of keys are active. See the next question for how to deactivate your keys once all your clients are switched over.
+
### How do I deactivate the `anon` and `service_role` JWT-based API keys after moving to publishable and secret keys?
You can do this in the [API Keys](/dashboard/project/_/settings/api-keys/new) dashboard. To prevent downtime in your application's components, use the last used indicators on the page to confirm that these are no longer used before deactivating.
diff --git a/apps/docs/content/guides/realtime/architecture.mdx b/apps/docs/content/guides/realtime/architecture.mdx
index 29f7a9b37e6e9..92881e3ae7b02 100644
--- a/apps/docs/content/guides/realtime/architecture.mdx
+++ b/apps/docs/content/guides/realtime/architecture.mdx
@@ -9,7 +9,13 @@ Realtime is a globally distributed Elixir cluster. Clients can connect to any no
Realtime is written in [Elixir](https://elixir-lang.org/), which compiles to [Erlang](https://www.erlang.org/), and utilizes many tools the [Phoenix Framework](https://www.phoenixframework.org/) provides out of the box.
-
+
## Elixir & Phoenix
diff --git a/apps/docs/content/guides/realtime/concepts.mdx b/apps/docs/content/guides/realtime/concepts.mdx
index 79ea841817d6e..2cd9f37dac4d2 100644
--- a/apps/docs/content/guides/realtime/concepts.mdx
+++ b/apps/docs/content/guides/realtime/concepts.mdx
@@ -107,32 +107,6 @@ Anyone with access to a valid JWT signed with the project's JWT secret is able t
Clients can choose to receive `INSERT`, `UPDATE`, `DELETE`, or `*` (all) changes for all changes in a schema, a table in a schema, or a column's value in a table. Your clients should only listen to tables in the `public` schema and you must first enable the tables you want your clients to listen to.
-## Settings
-
-
-
-Realtime settings are currently under Feature Preview section in the dashboard.
-
-
-
-
-
-You can set the following settings using the Realtime Settings screen in your Dashboard:
-
-- Channel Restrictions: You can toggle this settings to set Realtime to allow public channels or set it to use only private channels with [Realtime Authorization](/docs/content/guides/realtime/authorization).
-- Database connection pool size: Determines the number of connections used for Realtime Authorization RLS checking
- {/* supa-mdx-lint-disable-next-line Rule004ExcludeWords */}
-- Max concurrent clients: Determines the maximum number of clients that can be connected
-
-All changes made in this screen will disconnect all your connected clients to ensure Realtime starts with the appropriate settings and all changes are stored in Supabase middleware.
-
## Choosing between Broadcast and Postgres Changes for database changes
We recommend using Broadcast by default and using Broadcast from Database specifically as it will allow you to scale your application compared to Postgres Changes.
diff --git a/apps/docs/content/guides/realtime/protocol.mdx b/apps/docs/content/guides/realtime/protocol.mdx
index a4a4d352936f5..fbb83572eb9fd 100644
--- a/apps/docs/content/guides/realtime/protocol.mdx
+++ b/apps/docs/content/guides/realtime/protocol.mdx
@@ -4,187 +4,315 @@ title: 'Realtime Protocol'
description: 'Understanding Realtime Protocol'
---
-The Realtime Protocol is a set of message formats used for communication over a WebSocket connection between a Realtime client and server. These messages are used to initiate a connection, update access tokens, receive system status updates, and receive real-time updates from the Postgres database.
+## WebSocket connection setup
-## Connection
+To start the connection, use the WebSocket URL for your deployment:
-In the initial message, the client sends a message specifying the features they want to use (Broadcast, Presence, Postgres Changes).
+- Supabase projects: `wss://.supabase.co/realtime/v1/websocket?apikey=`
+- Self-hosted projects: `wss://:/socket/websocket?apikey=`
+
+{/* supa-mdx-lint-disable-next-line Rule003Spelling */}
+As an example, using [websocat](https://github.com/vi/websocat), you can run the following command in your terminal:
+
+```bash
+# With Supabase
+websocat "wss://.supabase.co/realtime/v1/websocket?apikey="
+
+# With self-hosted
+websocat "wss://:/socket/websocket?apikey="
+```
+
+At this stage you can also set other URL parameters:
+
+- `log_level`: sets the log level to be used by this connection to help you debug potential issues
+
+After connecting, send a `phx_join` event to the server to join a channel.
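As a sketch, the join step can be expressed with the standard `WebSocket` API. The topic name `realtime:room1`, the placeholder URL values, and the `buildJoinMessage` helper are illustrative, not part of the protocol:

```javascript
// Build a phx_join protocol message; all config values shown are defaults/placeholders.
function buildJoinMessage(topic, accessToken) {
  return JSON.stringify({
    event: "phx_join",
    topic, // e.g. "realtime:room1"
    payload: {
      config: {
        broadcast: { ack: false, self: false },
        presence: { enabled: false, key: "" },
        postgres_changes: [],
        private: false,
      },
      access_token: accessToken,
    },
    ref: "1", // lets the client match the server's phx_reply to this request
  });
}

// Usage over an open socket (placeholders for project ref and API key):
// const socket = new WebSocket("wss://PROJECT_REF.supabase.co/realtime/v1/websocket?apikey=API_KEY");
// socket.addEventListener("open", () => socket.send(buildJoinMessage("realtime:room1", "API_KEY")));
```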
+
+## Protocol messages
+
+### Payload format
+
+All messages sent to the server or received from the server follow the same structure:
```ts
{
- "event": "phx_join",
+ "event": string,
"topic": string,
- "payload": {
- "config": {
- "broadcast": {
+ "payload": any,
+ "ref": string
+}
+```
+
+- `event`: The type of event being sent or received. This can be a specific event like `phx_join`, `postgres_changes`, etc.
+- `topic`: The topic to which the message belongs. This is usually a string that identifies the channel or context of the message.
+- `payload`: The data associated with the event. This can be any JSON-serializable data structure, such as an object or an array.
+- `ref`: A unique reference ID for the message. This is used to track the message and its response on the client side when a reply is needed to proceed.
+
+### Event types
+
+The following are the event types from the Realtime protocol:
+
+| Event Type | Description | Client Sent | Server Sent | Requires Ref |
+|------------|-------------|--------------|-------------|--------------|
+| `phx_join` | Initial message to join a channel and configure features | ✅ | ⛔ | ✅ |
+| `phx_close` | Message from server to signal channel closed | ⛔ | ✅ | ⛔ |
+| `phx_leave` | Message to leave a channel | ✅ | ⛔ | ✅ |
+| `phx_error` | Error message sent by the server when an error occurs | ⛔ | ✅ | ⛔ |
+| `phx_reply` | Response to a `phx_join` or other requests | ⛔ | ✅ | ⛔ |
+| `heartbeat` | Heartbeat message to keep the connection alive | ✅ | ✅ | ✅ |
+| `access_token` | Message to update the access token | ✅ | ⛔ | ⛔ |
+| `system` | System messages to inform about the status of the Postgres subscription | ⛔ | ✅ | ⛔ |
+| `broadcast` | Broadcast message sent to all clients in a channel | ✅ | ✅ | ⛔ |
+| `presence` | Presence state update sent after joining a channel | ✅ | ⛔ | ⛔ |
+| `presence_state` | Presence state sent by the server on join | ⛔ | ✅ | ⛔ |
+| `presence_diff` | Presence state diff update sent after a change in presence state | ⛔ | ✅ | ⛔ |
+| `postgres_changes` | Postgres CDC message containing changes to the database | ⛔ | ✅ | ⛔ |
+
+Each of these events has a specific payload structure that defines the data it carries. Below are the details for each event type payload.
+
+#### Payload of phx_join
+
+This is the initial message required to join a channel. The client sends this message to the server to join a specific topic and configure the features it wants to use, such as Postgres changes, presence, and broadcasting.
+
+```ts
+{
+ "config": {
+ "broadcast": {
+ "ack": boolean,
"self": boolean
+ },
+ "presence": {
+ "enabled": boolean,
+ "key": string
},
- "presence": {
- "key": string
- },
- "postgres_changes": [
- {
- "event": "*" | "INSERT" | "UPDATE" | "DELETE",
- "schema": string,
- "table": string,
- "filter": string + '=' + "eq" | "neq" | "gt" | "gte" | "lt" | "lte" | "in" + '.' + string
- }
- ]
- }
+ "postgres_changes": [
+ {
+ "event": string,
+ "schema": string,
+ "table": string,
+ "filter": string
+ }
+ ],
+ "private": boolean
},
- "ref": string
+ "access_token": string
}
```
-
+- `config`:
+ - `private`: Whether the channel is private
+ - `broadcast`: Configuration options for broadcasting messages
+ - `ack`: Acknowledge broadcast messages
+ - `self`: Include the sender in broadcast messages
+ - `presence`: Configuration options for presence tracking
+ - `enabled`: Whether presence tracking is enabled for this channel
+ - `key`: Key to be used for presence tracking; if not specified or empty, a UUID is generated and used
+ - `postgres_changes`: Array of configurations for Postgres changes
+ - `event`: Database change event to listen to, accepts `INSERT`, `UPDATE`, `DELETE`, or `*` to listen to all events.
+ - `schema`: Schema of the table to listen to, accepts `*` wildcard to listen to all schemas
+ - `table`: Table of the database to listen to, accepts `*` wildcard to listen to all tables
+ - `filter`: Filter to be used when pulling changes from database. Read more about filters in the usage docs for [Postgres Changes](https://supabase.com/docs/guides/realtime/postgres-changes?queryGroups=language&language=js#filtering-for-specific-changes)
+- `access_token`: Optional access token for authentication; if not provided, the server uses the default access token.
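For example, a `phx_join` payload subscribing to `INSERT` events on a hypothetical `public.messages` table, filtered on a `room_id` column, might look like this (table, column, and token values are illustrative):

```javascript
// Illustrative phx_join payload: listen for INSERTs on public.messages where room_id = 1.
const joinPayload = {
  config: {
    broadcast: { ack: false, self: false },
    presence: { enabled: false, key: "" },
    postgres_changes: [
      { event: "INSERT", schema: "public", table: "messages", filter: "room_id=eq.1" },
    ],
    private: false,
  },
  access_token: "YOUR_JWT", // placeholder
};
```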
+
+#### Payload of phx_close
+
+This message is sent by the server to signal that the channel has been closed. The payload is an empty object.
+
+#### Payload of phx_leave
-The `in` filter has the format `COLUMN_NAME=in.(value1,value2,value3)`. However, other filters use the format `COLUMN_NAME=FILTER_NAME.value`.
+This message is sent by the client to leave a channel. It can be used to clean up resources or stop listening for events on that channel. The payload should be an empty object.
-
+#### Payload of phx_error
-In response, the server sends the Postgres configuration with a unique ID. With this ID, the client should route incoming changes to the appropriate callback.
+This message is sent by the server when an unexpected error occurs in the channel. The payload is an empty object.
+
+#### Payload of phx_reply
+
+These messages are sent by the server in reply to requests that expect a response. The response content varies with the request type.
```ts
{
- "event": "phx_reply",
- "topic": string,
- "payload": {
- "response": {
- "postgres_changes": [
- {
- "id": number,
- "event": "*" | "INSERT" | "UPDATE" | "DELETE",
- "schema": string,
- "table": string,
- "filter": string + '=' + "eq" | "neq" | "gt" | "gte" | "lt" | "lte" | "in" + '.' + string
- }
- ]
- },
- "status": "ok" | "error"
- },
- "ref": string
+ "status": string,
+ "response": any
}
```
-## System messages
+- `status`: The status of the response, can be `ok` or `error`.
+- `response`: The response data, which can vary based on the event that was replied to.
+
+##### Payload of phx_reply response to phx_join
-System message are used to inform a client about the status of the Postgres subscription. The `payload.status` indicates if the subscription successful or not.
-The body of the `payload.message` can be "Subscribed to Postgres" or "Subscribing to Postgres failed" with subscription params.
+Contains the status of the join request and any additional information requested in the `phx_join` payload.
```ts
{
- "event": "system",
- "topic": string,
- "payload":{
- "channel": string,
- "extension": "postgres_changes",
- "message": "Subscribed to PostgreSQL" | "Subscribing to PostgreSQL failed",
- "status": "ok" | "error"
- },
- "ref": null,
+ "postgres_changes": [
+ {
+ "id": number,
+ "event": string,
+ "schema": string,
+ "table": string
+ }
+ ]
}
```
-## Heartbeat
+- `postgres_changes`: Array of Postgres changes that the client is subscribed to, each object contains:
+ - `id`: Unique identifier for the Postgres changes subscription
+ - `event`: The type of event the client is subscribed to, such as `INSERT`, `UPDATE`, `DELETE`, or `*`
+ - `schema`: The schema of the table the client is subscribed to
+ - `table`: The table the client is subscribed to
+
+##### Payload of phx_reply response to presence
+
+The reply to presence events contains an empty object.
+
+##### Payload of phx_reply response on heartbeat
-The heartbeat message should be sent every 30 seconds to avoid a connection timeout.
+The reply to heartbeat events contains an empty object.
+
+#### Payload of system
+
+System messages are sent by the server to inform the client about the status of Realtime channel subscriptions.
```ts
{
- "event": "heartbeat",
- "topic": "phoenix",
- "payload": {},
- "ref": string
+ "message": string,
+ "status": string,
+ "extension": string,
+ "channel": string
}
```
-## Access token
+- `message`: A human-readable message describing the status of the subscription.
+- `status`: The status of the subscription, can be `ok`, `error`, or `timeout`.
+- `extension`: The extension that sent the message.
+- `channel`: The channel to which the message belongs, such as `realtime:room1`.
+
+#### Payload of heartbeat
-To update the access token, you need to send to the server a message specifying a new token in the `payload.access_token` value.
+The heartbeat message should be sent at least every 30 seconds to avoid a connection timeout. The payload should be an empty object.
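A minimal heartbeat sketch: the 25-second interval is an illustrative choice comfortably under the 30-second timeout, and `buildHeartbeat` is a hypothetical helper:

```javascript
// Heartbeats are sent on the "phoenix" topic with an empty payload and a unique ref.
let heartbeatRef = 0;
function buildHeartbeat() {
  heartbeatRef += 1;
  return JSON.stringify({
    event: "heartbeat",
    topic: "phoenix",
    payload: {}, // heartbeat payload is an empty object
    ref: String(heartbeatRef), // unique ref so the phx_reply can be matched
  });
}

// Usage over an open socket (not executed here):
// setInterval(() => socket.send(buildHeartbeat()), 25_000);
```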
+
+#### Payload of access_token
+
+Used to set a new token for Realtime authentication, or to refresh the current one so the channel is not closed.
```ts
{
- "event": "access_token",
- "topic": string,
- "payload":{
- "access_token": string
- },
- "ref": string
+ "access_token": string
}
```
-## Postgres CDC message
+- `access_token`: The new access token to use for authentication, whether changing it or refreshing it.
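A sketch of building this message; the topic value and the `buildAccessTokenMessage` helper are illustrative:

```javascript
// Build an access_token message to refresh the token on a joined channel.
function buildAccessTokenMessage(topic, newToken) {
  return JSON.stringify({
    event: "access_token",
    topic, // same topic as the joined channel, e.g. "realtime:room1"
    payload: { access_token: newToken },
    ref: null, // access_token messages do not require a ref
  });
}
```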
+
+#### Payload of postgres_changes
-Realtime sends a message with the following structure. By default, the payload only includes new record changes, and the `old` entry includes the changed row's primary id. If you want to receive old records, you can set the replicate identity of your table to full. Check out [this section of the guide](/docs/guides/realtime/postgres-changes#receiving-old-records).
+Server-sent message containing a change from a subscribed schema and table. This message is sent when a change the client is subscribed to occurs in the database. The payload contains the details of the change, including the schema, table, event type, and the new and old values.
```ts
{
- "event": "postgres_changes",
- "topic": string,
- "payload": {
- "data": {
- schema: string,
- table: string,
- commit_timestamp: string,
- eventType: "*" | "INSERT" | "UPDATE" | "DELETE",
- new: {[key: string]: boolean | number | string | null},
- old: {[key: string]: number | string},
- errors: string | null
+ "ids": [
+ number
+ ],
+ "data": {
+ "schema": string,
+ "table": string,
+ "commit_timestamp": string,
+ "eventType": "*" | "INSERT" | "UPDATE" | "DELETE",
+ "new": {
+ [key: string]: boolean | number | string | null
},
- "ids": Array
- },
- "ref": null
+ "old": {
+ [key: string]: boolean | number | string | null
+ },
+ "errors": string | null,
+ "latency": number
+ }
}
```
-## Broadcast message
+- `ids`: An array of unique identifiers for the changes that occurred.
+- `data`: An object containing the details of the change:
+ - `schema`: The schema of the table where the change occurred.
+ - `table`: The table where the change occurred.
+ - `commit_timestamp`: The timestamp when the change was committed to the database.
+ - `eventType`: The type of event that occurred, such as `INSERT`, `UPDATE`, `DELETE`, or `*` for all events.
+ - `new`: An object representing the new values after the change, with keys as column names and values as their corresponding values.
+ - `old`: An object representing the old values before the change, with keys as column names and values as their corresponding values.
+ - `errors`: Any errors that occurred during the change, if applicable.
+ - `latency`: The latency of the change event, in milliseconds.
+
+#### Payload of broadcast
-Structure of the broadcast event
+Structure of the broadcast event to be sent to all clients in a channel. The `payload` field contains the event name and the data to broadcast.
```ts
{
- "event": "broadcast",
- "topic": string,
- "payload": {
- "event": string,
- "payload": {[key: string]: boolean | number | string | null | undefined},
- "type": "broadcast"
- },
- "ref": null
+ "event": string,
+ "payload": any,
+ "type": "broadcast"
}
```
-## Presence message
+- `event`: The name of the event to broadcast.
+- `payload`: The data associated with the event, which can be any JSON-serializable data structure.
+- `type`: The type of message, which is always `broadcast` for broadcast messages.
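A sketch of building a broadcast message; the `cursor-pos` event name, its data, and the `buildBroadcastMessage` helper are hypothetical:

```javascript
// Wrap an application event in the broadcast protocol envelope.
function buildBroadcastMessage(topic, eventName, data) {
  return JSON.stringify({
    event: "broadcast",
    topic,
    payload: { event: eventName, payload: data, type: "broadcast" },
    ref: null, // broadcast messages do not require a ref
  });
}

// e.g. socket.send(buildBroadcastMessage("realtime:room1", "cursor-pos", { x: 10, y: 20 }))
```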
-The Presence events allow clients to monitor the online status of other clients in real-time.
+#### Payload of presence
-### State update
+Presence messages are used to track the online status of clients in a channel. When a client joins or leaves a channel, a presence message is sent to all clients in that channel.
+
+#### Payload of presence_state
After joining, the server sends a `presence_state` message to a client with presence information. The payload field contains keys in UUID format, where each key represents a client and its value is a JSON object containing information about that client.
```ts
{
- "event": "presence_state",
- "topic": string,
- "payload": {
- [key: string]: {metas: Array<{phx_ref: string, name: string, t: float}>}
- },
- "ref": null
+ [key: string]: {
+ metas: [
+ {
+ phx_ref: string,
+ name: string,
+ t: float
+ }
+ ]
+ }
}
```
-### Diff update
+- `key`: The UUID of the client.
+- `metas`: An array of metadata objects for the client, each containing:
+ - `phx_ref`: A unique reference ID for the metadata.
+ - `name`: The name of the client.
+ - `t`: A timestamp indicating when the client joined or last updated its presence state.
+
+#### Payload of presence_diff
After a change to the presence state, such as a client joining or leaving, the server sends a presence_diff message to update the client's view of the presence state. The payload field contains two keys, `joins` and `leaves`, which represent clients that have joined and left, respectively. The values associated with each key are UUIDs of the clients.
```ts
{
- "event": "presence_diff",
- "topic": string,
- "payload": {
- "joins": {metas: Array<{phx_ref: string, name: string, t: float}>},
- "leaves": {metas: Array<{phx_ref: string, name: string, t: float}>}
+ "joins": {
+ metas: [{
+ phx_ref: string,
+ name: string,
+ t: float
+ }]
},
- "ref": null
+ "leaves": {
+ metas: [{
+ phx_ref: string,
+ name: string,
+ t: float
+ }]
+ }
}
```
+
+- `joins`: An object containing metadata for clients that have joined the channel, with keys as UUIDs and values as metadata objects.
+- `leaves`: An object containing metadata for clients that have left the channel, with keys as UUIDs and values as metadata objects.
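A client can keep a local view of presence by starting from the `presence_state` payload and applying each `presence_diff` as it arrives. A simplified sketch (real clients merge `metas` by `phx_ref`; this version replaces whole entries):

```javascript
// Apply a presence_diff payload to a local presence state map without mutating it.
function applyPresenceDiff(state, diff) {
  const next = { ...state };
  for (const [key, value] of Object.entries(diff.joins)) next[key] = value; // joined clients
  for (const key of Object.keys(diff.leaves)) delete next[key];             // departed clients
  return next;
}
```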
+
+## REST API
+
+The Realtime protocol is primarily designed for WebSocket communication, but it can also be accessed via a REST API. This allows you to interact with the Realtime service using standard HTTP methods.
diff --git a/apps/docs/content/guides/realtime/settings.mdx b/apps/docs/content/guides/realtime/settings.mdx
new file mode 100644
index 0000000000000..da1b323aaafaf
--- /dev/null
+++ b/apps/docs/content/guides/realtime/settings.mdx
@@ -0,0 +1,35 @@
+---
+title: 'Settings'
+description: 'Realtime Settings that allow you to configure your Realtime usage.'
+subtitle: 'Realtime Settings that allow you to configure your Realtime usage.'
+---
+
+## Settings
+
+
+
+Realtime settings are currently under the Feature Preview section in the dashboard.
+
+
+
+
+
+All changes made on this screen will disconnect all your connected clients so that Realtime restarts with the appropriate settings. All changes are stored in Supabase middleware.
+
+
+
+
+
+You can set the following settings using the Realtime Settings screen in your Dashboard:
+
+- Channel Restrictions: Toggle this setting to allow public channels, or restrict Realtime to private channels only with [Realtime Authorization](/docs/guides/realtime/authorization).
+- Database connection pool size: Determines the number of connections used for Realtime Authorization RLS checking
+ {/* supa-mdx-lint-disable-next-line Rule004ExcludeWords */}
+- Max concurrent clients: Determines the maximum number of clients that can be connected
diff --git a/apps/docs/public/img/guides/platform/realtime/architecture--black.png b/apps/docs/public/img/guides/platform/realtime/architecture--black.png
new file mode 100644
index 0000000000000..f4dacf3fe2ccc
Binary files /dev/null and b/apps/docs/public/img/guides/platform/realtime/architecture--black.png differ
diff --git a/apps/docs/public/img/guides/platform/realtime/architecture--dark.png b/apps/docs/public/img/guides/platform/realtime/architecture--dark.png
new file mode 100644
index 0000000000000..f4dacf3fe2ccc
Binary files /dev/null and b/apps/docs/public/img/guides/platform/realtime/architecture--dark.png differ
diff --git a/apps/docs/public/img/guides/platform/realtime/architecture--light.png b/apps/docs/public/img/guides/platform/realtime/architecture--light.png
new file mode 100644
index 0000000000000..55a5226cd266b
Binary files /dev/null and b/apps/docs/public/img/guides/platform/realtime/architecture--light.png differ
diff --git a/apps/docs/public/img/guides/platform/realtime-settings--dark.png b/apps/docs/public/img/guides/platform/realtime/realtime-settings--dark.png
similarity index 100%
rename from apps/docs/public/img/guides/platform/realtime-settings--dark.png
rename to apps/docs/public/img/guides/platform/realtime/realtime-settings--dark.png
diff --git a/apps/docs/public/img/guides/platform/realtime-settings--light.png b/apps/docs/public/img/guides/platform/realtime/realtime-settings--light.png
similarity index 100%
rename from apps/docs/public/img/guides/platform/realtime-settings--light.png
rename to apps/docs/public/img/guides/platform/realtime/realtime-settings--light.png
diff --git a/apps/www/_blog/2025-07-14-supabase-ui-platform-kit.mdx b/apps/www/_blog/2025-07-14-supabase-ui-platform-kit.mdx
index d8dd21a88c0d4..b15e807f388cc 100644
--- a/apps/www/_blog/2025-07-14-supabase-ui-platform-kit.mdx
+++ b/apps/www/_blog/2025-07-14-supabase-ui-platform-kit.mdx
@@ -118,4 +118,4 @@ If you’re building your own plaform, reach out to the team: [supabase.com/solu
Head on over to the docs to get started: [ui.supabase.com](https://supabase.com/ui/docs/platform/platform-kit)
-You can also check out the [repo](https://github.com/supabase/supabase-embedded-dashboard) to download and run the app yourself to see how it works, or just pull key files and components to use in your own implementation. The code is yours to use and improve however you see fit.
+You can also check out the [repo](https://github.com/supabase/supabase/tree/master/apps/ui-library/registry/default/platform) to download and run the app yourself to see how it works, or just pull key files and components to use in your own implementation. The code is yours to use and improve however you see fit.
diff --git a/apps/www/_blog/2025-07-17-algolia-connector-for-supabase.mdx b/apps/www/_blog/2025-07-17-algolia-connector-for-supabase.mdx
index c0f0b33bd3575..203fbd8ed4c18 100644
--- a/apps/www/_blog/2025-07-17-algolia-connector-for-supabase.mdx
+++ b/apps/www/_blog/2025-07-17-algolia-connector-for-supabase.mdx
@@ -40,7 +40,7 @@ Read on to see how the Algolia Connector for Supabase works.
## How to use Algolia Connector for Supabase
-To get started with Algolia’s connector, prepare the data in your Supabase database, create Supabase as a source in Algolia’s dashboard, set up your Algolia index and configure your sync job. Here’s how you can [get started](https://www.algolia.com/doc/guides/sending-and-managing-data/send-and-update-your-data/connectors/supabase) in just a few minutes.
+To get started with Algolia’s connector, prepare the data in your Supabase database, create Supabase as a source in Algolia’s dashboard, set up your Algolia index and configure your sync job. Here’s how you can [get started](https://www.algolia.com/doc/guides/sending-and-managing-data/send-and-update-your-data/connectors/supabase?utm_medium=referral&utm_source=supabase&utm_campaign=supabase_blog) in just a few minutes.
### 1. Prepare your data in Supabase
@@ -67,8 +67,8 @@ Later in the Algolia dashboard, you will be able to pick exactly which columns y
### 2. Go to Algolia dashboard
-1. In Algolia, go to [Data Sources → Connectors](https://dashboard.algolia.com/connectors)
-2. Find "Supabase" in the list and click [Connect](https://dashboard.algolia.com/connectors/supabase/create)
+1. In Algolia, go to [Data Sources → Connectors](https://dashboard.algolia.com/connectors?utm_medium=referral&utm_source=supabase&utm_campaign=supabase_blog)
+2. Find "Supabase" in the list and click [Connect](https://dashboard.algolia.com/connectors/supabase/create?utm_medium=referral&utm_source=supabase&utm_campaign=supabase_blog)
### 3. Configure your data source
@@ -95,7 +95,7 @@ Once you create Supabase as a data source, you'll need to tell Algolia where to
1. Choose how often you want it to sync your data (e.g. every 6 hours)
2. Pick whether to do full syncs or partial updates
3. Select the table or view you want to index. We recommend selecting only one table or view for each index
-4. Choose your [objectID](https://www.algolia.com/doc/guides/sending-and-managing-data/prepare-your-data/in-depth/what-is-in-a-record/#unique-record-identifier) (usually your primary key)
+4. Choose your [objectID](https://www.algolia.com/doc/guides/sending-and-managing-data/prepare-your-data/in-depth/what-is-in-a-record/#unique-record-identifier?utm_medium=referral&utm_source=supabase&utm_campaign=supabase_blog) (usually your primary key)
Once configured, create the task. Algolia will start syncing records from Supabase into your search index (in the YouTube demo above, 8,800+ movie records were synced in under a minute).
@@ -108,4 +108,4 @@ With the Algolia + Supabase connector, you don’t need to build or maintain cus
## Getting Started
1. [Supabase](/dashboard)
-2. [Algolia](https://dashboard.algolia.com/users/sign_up)
+2. [Algolia](https://dashboard.algolia.com/users/sign_up?utm_medium=referral&utm_source=supabase&utm_campaign=supabase_blog)
diff --git a/apps/www/_blog/2025-07-18-launch-week-15-top-10.mdx b/apps/www/_blog/2025-07-18-launch-week-15-top-10.mdx
new file mode 100644
index 0000000000000..fdfdf0c6b28de
--- /dev/null
+++ b/apps/www/_blog/2025-07-18-launch-week-15-top-10.mdx
@@ -0,0 +1,92 @@
+---
+title: 'Top 10 Launches of Launch Week 15'
+description: Highlights from Launch Week 15
+author: wenbo
+image: launch-week-15/wrap-up/og.png
+thumb: launch-week-15/wrap-up/thumb.png
+launchweek: '15'
+categories:
+ - launch-week
+tags:
+ - launch-week
+date: '2025-07-18T15:00:00'
+toc_depth: 3
+---
+
+Here are the top 10 launches from the past week. They're all very exciting so make sure to check out every single one.
+
+## #1: New API Keys + JWT Signing Keys
+
+Supabase Platform released new API keys, Publishable and Secret, and Supabase Auth now supports asymmetric JWTs with Elliptic Curve and RSA cryptographic algorithms. These changes improve the performance, reliability, and security of your Supabase projects.
+
+[Read more](https://supabase.com/blog/jwt-signing-keys)
+
+## #2: Analytics Buckets with Apache Iceberg Support
+
+We launched Supabase Analytics Buckets in Private Alpha—storage buckets optimized for analytics with built-in support for Apache Iceberg. We’ve coupled this with the new Supabase Iceberg Wrapper to make it easier for you to query your analytical data.
+
+[Read more](https://supabase.com/blog/analytics-buckets)
+
+## #3: OpenTelemetry Support
+
+We’ve added support for OpenTelemetry (OTel) across our services so you can soon send logs, metrics, and traces to any OTel-compatible tooling. We’ve also unified logs under a single interface in our Dashboard as well as added new capabilities to our AI Assistant to improve the debugging experience.
+
+[Read more](https://supabase.com/blog/new-observability-features-in-supabase)
+
+## #4: Build with Figma Make and Supabase
+
+We’ve partnered with Figma so you can hook up a Supabase backend to your Figma Make project, enabling you to persist data and tap into the suite of Supabase products to help you build prototypes quickly and scale them when you gain traction.
+
+[Read more](https://supabase.com/blog/figma-make-support-for-supabase)
+
+## #5: Storage: 500 GB Uploads and Cheaper Cached Egress
+
+You can now upload files as large as 500 GB (up from 50 GB), enjoy much cheaper cached egress pricing at $0.03/GB (down from $0.09/GB), and benefit from an increased egress quota that doubles your egress before you have to start paying.
+
+[Read more](https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing)
+
+## #6: Edge Functions: Deno 2, 97% Faster Boot Times, and Persistent File Storage
+
+Edge Functions now support Deno 2.1, persistent file storage (mount any S3-compatible storage and read and write to it inside your functions), up to 97% faster boot times, and Deno’s Sync APIs.
+
+[Read more](https://supabase.com/blog/persistent-storage-for-faster-edge-functions)
+
+## #7: Branching 2.0: GitHub Optional
+
+You can now spin up, view diffs, and merge your branches directly from the Supabase Dashboard without having to connect to GitHub.
+
+[Read more](https://supabase.com/blog/branching-2-0)
+
+## #8: Supabase UI: Platform Kit
+
+We’ve built out several UI components to make it easy for you to feature the core of Supabase Dashboard inside your own app so you or your users can interact with Supabase projects natively with a customizable interface.
+
+[Read more](https://supabase.com/blog/supabase-ui-platform-kit)
+
+## #9: Stripe-To-Postgres Sync Engine as an NPM Package
+
+Now you can conveniently sync your Stripe data to your Supabase database by importing the npm package `@supabase/stripe-sync-engine`, whether in your Node.js app or even deploying it in a Supabase Edge Function.
+
+[Read more](https://supabase.com/blog/stripe-engine-as-sync-library)
+
+## #10: Algolia Connector for Supabase
+
+We’ve been collaborating closely with Algolia to bring you a connector for Supabase so you can easily index your data and enable world class search experiences.
+
+[Read more](https://supabase.com/blog/algolia-connector-for-supabase)
+
+## Launch Week Continues
+
+There are always more activities for you to get involved with:
+
+### Launch Week 15: Meetups
+
+Our community is hosting more meetups around the world. This is your chance to engage with others building with Supabase in a city near you.
+
+[See events](https://supabase.com/events?category=meetup)
+
+### Launch Week 15: Hackathon
+
+We've got another hackathon that you won't want to miss! Now's your chance to vibe code something amazing, show it off to the community, and win some limited edition Supabase swag.
+
+[Read more](https://supabase.com/blog/lw15-hackathon)
diff --git a/apps/www/_blog/2025-07-18-lw15-hackathon.mdx b/apps/www/_blog/2025-07-18-lw15-hackathon.mdx
new file mode 100644
index 0000000000000..27066cdea2453
--- /dev/null
+++ b/apps/www/_blog/2025-07-18-lw15-hackathon.mdx
@@ -0,0 +1,83 @@
+---
+title: 'Supabase Launch Week 15 Hackathon'
+description: Build an Open Source Project over 10 days. 5 prize categories.
+author: tyler_shukert
+image: launch-week-15/hackathon/lw15-hackathon.png
+thumb: launch-week-15/hackathon/lw15-hackathon.png
+launchweek: '15'
+categories:
+ - launch-week
+tags:
+ - launch-week
+ - hackathon
+date: '2025-07-18:11:00'
+toc_depth: 2
+---
+
+We have just concluded [Launch Week 15 with so many new updates](https://supabase.com/launch-week), but no launch week is complete without a hackathon! The Supabase Launch Week 15 Hackathon begins now! Open your favorite IDE or AI agent and start building!
+
+The hackathon begins as this blog post is published and will conclude on Sunday, July 27th, at 11:59 pm PT. You could win extremely limited edition Supabase swag and add your name to the Supabase Hackathon Hall of Fame.
+
+For some inspiration, check out all the [winners from previous hackathons](https://supabase.com/blog/tags/hackathon).
+
+This is the perfect excuse to “Build in a weekend, scale to millions.” Since you retain all the rights to your submissions, you can use the hackathon as a launch pad for your new startup ideas, side projects, or indie hacks.
+
+## Key Facts
+
+- You have 10 days to build a new **open-source** project using Supabase in some capacity
+ - Starting 10:00 am PT Friday, July 18th, 2025
+ - The submission deadline is 11:59 pm PT on Sunday, July 27th, 2025
+- Enter as an individual or as a team of up to 4 people
+- Build whatever you want - a project, app, tool, or library. Anything.
+- Submit a 1-minute video containing the following:
+ - Name of the project
+ - Demonstration of the project
+ - How Supabase is used within the project
+- [Here is an example video](https://youtu.be/KaWJQzTTx5k). We do not assess the quality of the video itself. Remember to keep it concise.
+
+## Prizes
+
+There are 5 categories, with prizes for:
+
+- Best overall project
+- Best use of AI
+- Most fun / best easter egg
+- Most technically impressive
+- Most visually pleasing
+
+There will be a winner and a runner-up prize for each category. Every team member on winning/runner-up teams gets a Supabase Launch Week swag kit, and the winner of the best overall project will get this cool mechanical keyboard as well!
+
+## Submission
+
+You should submit your project via the submission form before 11:59 pm PT on Sunday, July 27th, 2025. The submission form will be added to this article before the deadline. Come back in about a week to find it!
+
+## Judges
+
+The Supabase team will judge the winners for each category.
+We will be looking for:
+
+- Creativity/inventiveness
+- Functions correctly/smoothly
+- Visually pleasing
+- Technically impressive
+- Use of Supabase features
+- FUN! 😃
+
+## Rules
+
+- Team size 1-4 (all team members on winning teams will receive a prize)
+- You cannot be on multiple teams
+- One submission per team
+- It's not a requirement to use AI
+- All design elements, code, etc., for your project must be created **during** the event
+ - Using frameworks/libraries is fine
+- All entries must be Open Source (link to source code required in entry)
+- Must use Supabase in some capacity
+- Can be any language or framework
+- You must submit before the deadline (no late entries)
+- Include a link to a 1-minute demo video
+
+## Additional Info
+
+- Any intellectual property developed during the hackathon will belong to the team that developed it. We expect that each team will have an agreement between themselves regarding the IP, but this is not required.
+- By making a submission, you grant Supabase permission to use screenshots, code snippets, and/or links to your project or content of your README on our Twitter, blog, website, email updates, and in the Supabase Discord server. Supabase does not make any claims over your IP.
diff --git a/apps/www/_blog/2025-07-18-storage-500gb-uploads-cheaper-egress-pricing.mdx b/apps/www/_blog/2025-07-18-storage-500gb-uploads-cheaper-egress-pricing.mdx
new file mode 100644
index 0000000000000..2b68daa9f0277
--- /dev/null
+++ b/apps/www/_blog/2025-07-18-storage-500gb-uploads-cheaper-egress-pricing.mdx
@@ -0,0 +1,74 @@
+---
+title: 'Storage: 10x Larger Uploads, 3x Cheaper Cached Egress, and 2x Egress Quota'
+description: 'Upload files up to 500 GB with significant egress cost reductions.'
+categories:
+ - launch-week
+tags:
+ - launch-week
+ - storage
+date: '2025-07-18:10:00'
+toc_depth: 2
+author: inian
+image: launch-week-15/day-5-storage-cheaper-egress/og.png
+thumb: launch-week-15/day-5-storage-cheaper-egress/thumb.png
+launchweek: '15'
+---
+
+We're very excited to announce [Supabase Storage](/storage) is getting better for everyone. We are:
+
+- Increasing the maximum file size to 500 GB, up from 50 GB
+- Reducing cached egress pricing: requests cached by our API Gateway are now charged at $0.03/GB, down from $0.09/GB
+- Increasing egress quotas: Free Plan projects get 5 GB of cached egress in addition to 5 GB of uncached egress, and all paid plans get 250 GB of cached egress and 250 GB of uncached egress, bundled in
+
+The 500 GB limit for individual files will be available on all paid plans starting next week. Lower cached egress pricing and the increased cached egress quota will roll out gradually to all users over the next few weeks and will take effect at the end of your current billing cycle. The result is a Storage price reduction for all users.
+
+## 10x Larger Uploads
+
+Our community has asked for better support for increasingly large files, from high-resolution video platforms and media-heavy applications to SaaS platforms handling user-generated data, 3D models, and data archives.
+
+We have made several optimizations to our platform infrastructure and API gateway to ensure reliable handling of very large files, allowing us to increase the limit from 50 GB to 500 GB for all paid plans.
+
+Once it's released next week, you can take advantage of this feature by setting the new upload size limit [here](/dashboard/project/_/settings/storage) and using the new storage-specific hostname for your uploads: add `storage` after your project ref in the standard Supabase URL, replacing `project-ref.supabase.co` with `project-ref.storage.supabase.co`. The older URL format will continue to work.
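The hostname change above is a plain string substitution. As a quick illustration, here is a hypothetical helper (not part of supabase-js or any Supabase SDK):

```typescript
// Convert a standard Supabase project URL to the storage-specific
// hostname recommended for large uploads.
// Hypothetical helper for illustration only.
function toStorageUrl(url: string): string {
  // project-ref.supabase.co -> project-ref.storage.supabase.co
  return url.replace(/^(https?:\/\/)([^.]+)\.supabase\.co/, '$1$2.storage.supabase.co')
}

const standard = 'https://abcd1234.supabase.co/storage/v1/object/videos/raw.mp4'
console.log(toStorageUrl(standard))
// -> https://abcd1234.storage.supabase.co/storage/v1/object/videos/raw.mp4
```

URLs that already use the storage hostname pass through unchanged, since `[^.]+` cannot match across the extra `.storage` label.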
+
+For uploading large files, we recommend using one of our multipart upload options:
+
+- [**Resumable uploads using TUS**](/docs/guides/storage/uploads/resumable-uploads) - Perfect for cases where network interruptions might occur, allowing uploads to resume from where they left off
+- [**S3 protocol multipart uploads**](/docs/guides/storage/uploads/s3-uploads) - Ideal for applications that need S3-compatible upload workflows
+
+Both approaches automatically handle breaking large files into manageable chunks during upload while presenting them as single objects for download.
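To make that chunking concrete, here is an illustrative sketch of how a multipart client might divide a large object into parts (the 1 GB part size is an arbitrary example; real TUS and S3 clients pick their own chunk sizes and handle this internally):

```typescript
// Illustrative only: plan byte ranges for a multipart upload.
interface Part {
  partNumber: number
  start: number // inclusive byte offset
  end: number   // exclusive byte offset
}

function planParts(totalBytes: number, partSize: number): Part[] {
  const parts: Part[] = []
  for (let start = 0, n = 1; start < totalBytes; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, totalBytes) })
  }
  return parts
}

// A 500 GB object split into 1 GB parts yields 500 independent parts.
// An interrupted upload only needs to resend the parts that were not
// yet confirmed, which is what makes resumable uploads practical.
const GB = 1024 ** 3
const parts = planParts(500 * GB, 1 * GB)
console.log(parts.length) // 500
```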
+
+## 3x Cheaper Cached Egress
+
+All Supabase traffic flows through our API Gateway, which also functions as a content delivery network (CDN). When an asset is cached at the edge (and frequently accessed storage objects typically are), the CDN delivers it immediately. If it isn't cached, the request is forwarded to the region hosting your Supabase project before returning to the user.
+
+Initially, we leaned towards keeping our pricing model simple instead of reflecting regional and cache-status variations in egress costs. This unfortunately meant that customers with very high cached storage bandwidth couldn't benefit from our lower cached egress rates.
+
+Today, we are introducing a new pricing line item that lets us offer cached egress at a much lower rate of $0.03/GB. Combined with the [Smart CDN for storage](/docs/guides/storage/cdn/smart-cdn), which significantly increases the cache hit rate, this should substantially reduce the egress bill for our largest Storage users.
+
+## 2x Egress Quota
+
+Paid plans previously included 250 GB of unified egress. We've now split that into 250 GB of cached egress and 250 GB of uncached egress, so customers with high cache hit rates effectively get twice the free egress. Free plans now include 5 GB of cached egress alongside 5 GB of uncached egress.
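As a back-of-the-envelope illustration of the numbers above (a sketch using only the rates and paid-plan quotas stated in this post, not an official billing calculator):

```typescript
// Illustrative only: estimated monthly Storage egress bill on a paid
// plan under the new pricing: $0.03/GB cached and $0.09/GB uncached,
// each with its own bundled 250 GB quota.
function egressBillUSD(cachedGB: number, uncachedGB: number): number {
  const billableCached = Math.max(0, cachedGB - 250)
  const billableUncached = Math.max(0, uncachedGB - 250)
  return billableCached * 0.03 + billableUncached * 0.09
}

// 1 TB of egress at a 90% cache hit rate:
console.log(egressBillUSD(900, 100).toFixed(2)) // "19.50"
// The same 1 TB under the old unified pricing (250 GB quota at
// $0.09/GB) would have been (1000 - 250) * 0.09 = $67.50.
```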
+
+## What Will You Build?
+
+Check out [Analytics Buckets](/blog/analytics-buckets), the other Storage launch this Launch Week, and learn how we built [persistent file storage for Edge Functions](/blog/persistent-storage-for-faster-edge-functions) on top of Storage.
+
+If you have any requests for improving Supabase Storage, [let us know](https://x.com/supabase)!
diff --git a/apps/www/components/LaunchWeek/15/data/lw15_build_stage.tsx b/apps/www/components/LaunchWeek/15/data/lw15_build_stage.tsx
index 30b751c07c6fa..ced89da85d572 100644
--- a/apps/www/components/LaunchWeek/15/data/lw15_build_stage.tsx
+++ b/apps/www/components/LaunchWeek/15/data/lw15_build_stage.tsx
@@ -1,9 +1,5 @@
-// see apps/www/components/LaunchWeek/13/Releases/data/lw13_build_stage.tsx for reference
-
-import { ReactNode } from 'react'
-import { type ClassValue } from 'clsx'
-import { PRODUCT_MODULES } from 'shared-data/products'
-import { AppWindow, Database, Globe } from 'lucide-react'
+import type { ClassValue } from 'clsx'
+import type { ReactNode } from 'react'
export interface BuildDay {
icon?: ReactNode // use svg jsx with 34x34px viewport
@@ -27,7 +23,6 @@ export interface BuildDayLink {
export const days: BuildDay[] = [
{
title: 'Supabase UI: Platform Kit',
- description: '',
id: 'platform-kit',
is_shipped: true,
links: [
@@ -40,7 +35,6 @@ export const days: BuildDay[] = [
},
{
title: 'Create a Supabase backend using Figma Make',
- description: '',
id: 'figma',
is_shipped: true,
links: [
@@ -53,7 +47,6 @@ export const days: BuildDay[] = [
},
{
title: 'Introducing stripe-sync-engine npm package',
- description: '',
id: 'stripe-engine',
is_shipped: true,
links: [
@@ -66,7 +59,6 @@ export const days: BuildDay[] = [
},
{
title: 'Improved Security Controls and A New Home for Security',
- description: '',
id: 'security-homepage',
is_shipped: true,
links: [
@@ -79,7 +71,6 @@ export const days: BuildDay[] = [
},
{
title: 'Algolia Connector for Supabase',
- description: '',
id: 'algolia-connector',
is_shipped: true,
links: [
@@ -91,13 +82,12 @@ export const days: BuildDay[] = [
],
},
{
- title: '',
- description: '',
- id: '',
- is_shipped: false,
+ title: 'Storage: 10x Larger Uploads, 3x Cheaper Cached Egress & 2x Egress Quota',
+ id: 'cheaper-egress',
+ is_shipped: true,
links: [
{
- url: '/blog/',
+ url: '/blog/storage-500gb-uploads-cheaper-egress-pricing',
label: 'Blog post',
target: '_blank',
},
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/og.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/og.png
new file mode 100644
index 0000000000000..7ddcd8174556c
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/og.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-dark.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-dark.png
new file mode 100644
index 0000000000000..fb7470d6c50c8
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-dark.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-light.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-light.png
new file mode 100644
index 0000000000000..64c02fdd12e54
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-cached-egress-light.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-dark.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-dark.png
new file mode 100644
index 0000000000000..169a663bd3b89
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-dark.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-light.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-light.png
new file mode 100644
index 0000000000000..18cd2477acce2
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/storage-uncached-egress-light.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/thumb.png b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/thumb.png
new file mode 100644
index 0000000000000..4704e7dc62e7e
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/day-5-storage-cheaper-egress/thumb.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/hackathon/lw15-hackathon.png b/apps/www/public/images/blog/launch-week-15/hackathon/lw15-hackathon.png
new file mode 100644
index 0000000000000..ed39b4c63832f
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/hackathon/lw15-hackathon.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/wrap-up/og.png b/apps/www/public/images/blog/launch-week-15/wrap-up/og.png
new file mode 100644
index 0000000000000..8b40ea58891b2
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/wrap-up/og.png differ
diff --git a/apps/www/public/images/blog/launch-week-15/wrap-up/thumb.png b/apps/www/public/images/blog/launch-week-15/wrap-up/thumb.png
new file mode 100644
index 0000000000000..f529dcc68c40c
Binary files /dev/null and b/apps/www/public/images/blog/launch-week-15/wrap-up/thumb.png differ
diff --git a/apps/www/public/rss.xml b/apps/www/public/rss.xml
index bfd1b14f68c5c..d486e2c8e6588 100644
--- a/apps/www/public/rss.xml
+++ b/apps/www/public/rss.xml
@@ -8,6 +8,27 @@
Fri, 18 Jul 2025 00:00:00 -0700
+ https://supabase.com/blog/launch-week-15-top-10
+ Top 10 Launches of Launch Week 15
+ https://supabase.com/blog/launch-week-15-top-10
+ Highlights from Launch Week 15
+ Fri, 18 Jul 2025 00:00:00 -0700
+
+
+ https://supabase.com/blog/lw15-hackathon
+ Supabase Launch Week 15 Hackathon
+ https://supabase.com/blog/lw15-hackathon
+ Build an Open Source Project over 10 days. 5 prize categories.
+ Fri, 18 Jul 2025 00:00:00 -0700
+
+
+ https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing
+ Storage: 10x Larger Uploads, 3x Cheaper Cached Egress, and 2x Egress Quota
+ https://supabase.com/blog/storage-500gb-uploads-cheaper-egress-pricing
+ Upload files up to 500 GB with significant egress cost reductions.
+ Fri, 18 Jul 2025 00:00:00 -0700
+
+https://supabase.com/blog/persistent-storage-for-faster-edge-functionsPersistent Storage and 97% Faster Cold Starts for Edge Functions
https://supabase.com/blog/persistent-storage-for-faster-edge-functions
@@ -259,20 +280,6 @@
Technical deep dive into the new DBOS integration for SupabaseTue, 10 Dec 2024 00:00:00 -0700
-
- https://supabase.com/blog/hack-the-base
- Hack the Base! with Supabase
- https://supabase.com/blog/hack-the-base
- Play cool games, win cool prizes.
- Fri, 06 Dec 2024 00:00:00 -0700
-
-
- https://supabase.com/blog/launch-week-13-top-10
- Top 10 Launches of Launch Week 13
- https://supabase.com/blog/launch-week-13-top-10
- Highlights from Launch Week 13
- Fri, 06 Dec 2024 00:00:00 -0700
-https://supabase.com/blog/database-build-v2database.build v2: Bring-your-own-LLM
@@ -287,6 +294,20 @@
Effortlessly Clone Data into a New Supabase ProjectFri, 06 Dec 2024 00:00:00 -0700
+
+ https://supabase.com/blog/hack-the-base
+ Hack the Base! with Supabase
+ https://supabase.com/blog/hack-the-base
+ Play cool games, win cool prizes.
+ Fri, 06 Dec 2024 00:00:00 -0700
+
+
+ https://supabase.com/blog/launch-week-13-top-10
+ Top 10 Launches of Launch Week 13
+ https://supabase.com/blog/launch-week-13-top-10
+ Highlights from Launch Week 13
+ Fri, 06 Dec 2024 00:00:00 -0700
+https://supabase.com/blog/supabase-queuesSupabase Queues