diff --git a/CHANGELOG.md b/CHANGELOG.md index f00b37f..a64372a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,7 +5,7 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). -## 0.3.4 +## [0.3.4] - 2025-03-25 ### Changed diff --git a/README.md b/README.md index 00e77e6..f549dfc 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,8 @@ # Phoenix.Sync +Real-time sync for Postgres-backed [Phoenix](https://www.phoenixframework.org/) applications. +

-<img src="…" alt="Phoenix sync illustration" />

[![Hex.pm](https://img.shields.io/hexpm/v/phoenix_sync.svg)](https://hex.pm/packages/phoenix_sync) @@ -18,29 +18,30 @@ [![Status](https://img.shields.io/badge/status-beta-orange)](https://github.com/electric-sql/phoenix_sync) [![Discord](https://img.shields.io/discord/933657521581858818?color=5969EA&label=discord)](https://discord.electric-sql.com) -Sync is the best way of building modern apps. Phoenix.Sync enables real-time sync for Postgres-backed [Phoenix](https://www.phoenixframework.org/) applications. - Documentation is available at [hexdocs.pm/phoenix_sync](https://hexdocs.pm/phoenix_sync). -## Build real-time apps on locally synced data +## Build real-time apps on sync + +Phoenix.Sync is a library that adds real-time sync to Postgres-backed [Phoenix](https://www.phoenixframework.org/) applications. Use it to sync data into both LiveView and front-end web and mobile applications. -- sync data into Elixir, `LiveView` and frontend web and mobile applications - integrates with `Plug` and `Phoenix.{Controller, LiveView, Router, Stream}` -- uses [ElectricSQL](https://electric-sql.com) for scalable data delivery and fan out -- maps `Ecto` queries to [Shapes](https://electric-sql.com/docs/guides/shapes) for partial replication +- uses [ElectricSQL](https://electric-sql.com) for core sync, fan-out and data delivery +- maps `Ecto.Query`s to [Shapes](https://electric-sql.com/docs/guides/shapes) for partial replication -## Usage +There are four key APIs for [read-path sync](#read-path-sync) out of Postgres: -There are four key APIs: +- `Phoenix.Sync.Client.stream/2` for low level usage in Elixir +- `Phoenix.Sync.LiveView.sync_stream/4` to sync into a LiveView +- `Phoenix.Sync.Router.sync/2` macro to expose a shape in your Router +- `Phoenix.Sync.Controller.sync_render/3` to return shapes from a Controller -- [`Phoenix.Sync.Client.stream/2`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Client.html#stream/2) for low level usage in Elixir -- [`Phoenix.Sync.LiveView.sync_stream/4`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.LiveView.html#sync_stream/4) to sync into a LiveView stream -- [`Phoenix.Sync.Router.sync/2`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Router.html#sync/2) macro to expose a statically defined shape in your Router -- [`Phoenix.Sync.Controller.sync_render/3`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Controller.html#sync_render/3) to expose dynamically constructed shapes from a Controller +And a `Phoenix.Sync.Writer` module for handling [write-path sync](#write-path-sync) back into Postgres. 
+ +## Read-path sync ### Low level usage in Elixir -Use [`Phoenix.Sync.Client.stream/2`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Client.html#stream/2) to convert an `Ecto.Query` into an Elixir `Stream`: +Use `Phoenix.Sync.Client.stream/2` to convert an `Ecto.Query` into an Elixir `Stream`: ```elixir stream = Phoenix.Sync.Client.stream(Todos.Todo) @@ -52,7 +53,7 @@ stream = ### Sync into a LiveView stream -Swap out `Phoenix.LiveView.stream/3` for [`Phoenix.Sync.LiveView.sync_stream/4`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.LiveView.html#sync_stream/4) to automatically keep a LiveView up-to-date with the state of your Postgres database: +Swap out `Phoenix.LiveView.stream/3` for `Phoenix.Sync.LiveView.sync_stream/4` to automatically keep a LiveView up-to-date with the state of your Postgres database: ```elixir defmodule MyWeb.MyLive do @@ -75,7 +76,7 @@ This means you can build fully end-to-end real-time multi-user applications with ### Sync shapes through your Router -Use the [`Phoenix.Sync.Router.sync/2`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Router.html#sync/2) macro to expose statically (compile-time) defined shapes in your Router: +Use the `Phoenix.Sync.Router.sync/2` macro to expose statically (compile-time) defined shapes in your Router: ```elixir defmodule MyWeb.Router do @@ -98,7 +99,7 @@ Because the shapes are exposed through your Router, the client connects through ### Sync dynamic shapes from a Controller -Sync shapes from any standard Controller using the [`Phoenix.Sync.Controller.sync_render/3`](https://hexdocs.pm/phoenix_sync/Phoenix.Sync.Controller.html#sync_render/3) view function: +Sync shapes from any standard Controller using the `Phoenix.Sync.Controller.sync_render/3` view function: ```elixir defmodule Phoenix.Sync.LiveViewTest.TodoController do @@ -122,7 +123,7 @@ This allows you to define and personalise the shape definition at runtime using You can sync _into_ any client in any language that [speaks HTTP and JSON](https://electric-sql.com/docs/api/http). -For example, using the Electric [Typescript client](https://electric-sql.com/docs/api/clients/typescript): +For example, using the Electric [TypeScript client](https://electric-sql.com/docs/api/clients/typescript): ```typescript import { Shape, ShapeStream } from "@electric-sql/client"; @@ -152,6 +153,51 @@ const MyComponent = () => { See the Electric [demos](https://electric-sql.com/demos) and [documentation](https://electric-sql.com/demos) for more client-side usage examples. +## Write-path sync + +The `Phoenix.Sync.Writer` module allows you to ingest batches of writes from the client. + +The idea is that the front-end can batch up [local optimistic writes](https://electric-sql.com/docs/guides/writes). For example using a library like [@TanStack/optimistic](https://github.com/TanStack/optimistic) or by [monitoring changes to a local embedded database](https://electric-sql.com/docs/guides/writes#through-the-db). + +These changes can be POSTed to a `Phoenix.Controller`, which then constructs a `Phoenix.Sync.Writer` instance. The writer instance authorizes and validates the writes before applying them to the database. Under the hood this uses `Ecto.Multi`, to ensure that transactions (batches of writes) are applied atomically. + +For example, the controller below handles local writes made to a project management app. It constructs a writer instance and pipes it through a series of `Phoenix.Sync.Writer.allow/3` calls. 
These register functions against `Ecto.Schema`s (in this case `Projects.Project` and `Projects.Issue`):
+
+```elixir
+defmodule MutationController do
+  use Phoenix.Controller, formats: [:json]
+
+  alias Phoenix.Sync.Writer
+  alias Phoenix.Sync.Writer.Format
+
+  def mutate(conn, %{"transaction" => transaction} = _params) do
+    user_id = conn.assigns.user_id
+
+    {:ok, txid, _changes} =
+      Phoenix.Sync.Writer.new()
+      |> Phoenix.Sync.Writer.allow(
+        Projects.Project,
+        check: &reject_invalid_params/2,
+        load: &Projects.load_for_user(&1, user_id),
+        validate: &Projects.Project.changeset/2
+      )
+      |> Phoenix.Sync.Writer.allow(
+        Projects.Issue
+        # Use the sensible defaults:
+        # validate: Projects.Issue.changeset/2
+        # etc.
+      )
+      |> Phoenix.Sync.Writer.apply(transaction, Repo, format: Format.TanstackOptimistic)
+
+    render(conn, :mutations, txid: txid)
+  end
+end
+```
+
+This facilitates incrementally adding bi-directional sync support to a Phoenix application, re-using your existing auth and schema/validation logic.
+
+See the `Phoenix.Sync.Writer` module docs for more information.
+
 ## Installation and configuration
 
 `Phoenix.Sync` can be used in two modes:
@@ -356,7 +402,7 @@
 You can also include `replica` (see below) in your static shape definitions:
 
 ```
 sync "/incomplete-todos", Todos.Todo, where: "completed = false", replica: :full
 ```
 
-For anything else more dyanamic, or to use Ecto queries, you should switch from using the `sync` macros in your router to using `sync_render/3` in a controller.
+For anything else more dynamic, or to use Ecto queries, you should switch from using the `sync` macros in your router to using `sync_render/3` in a controller.
 
 ### Using a keyword list
 
@@ -372,4 +418,3 @@ The available options are:
 
 - `replica` (optional). By default Electric will only send primary keys + changed columns on updates. Set `replica: :full` to receive the full row, not just the changed columns. See the [Electric Shapes guide](https://electric-sql.com/docs/guides/shapes) for more information.
-
diff --git a/config/test.exs b/config/test.exs
index 2c52bc2..233ad47 100644
--- a/config/test.exs
+++ b/config/test.exs
@@ -17,7 +17,8 @@ config :phoenix_sync, Support.Repo,
   port: 54321,
   stacktrace: true,
   show_sensitive_data_on_connection_error: true,
-  pool_size: 10
+  pool_size: 10,
+  pool: Ecto.Adapters.SQL.Sandbox
 
 config :phoenix_sync, Support.ConfigTestRepo,
   username: "postgres",
diff --git a/lib/phoenix/sync.ex b/lib/phoenix/sync.ex
index 57b556f..0e1081a 100644
--- a/lib/phoenix/sync.ex
+++ b/lib/phoenix/sync.ex
@@ -1,4 +1,10 @@
 defmodule Phoenix.Sync do
+  @moduledoc """
+  Real-time sync for Postgres-backed Phoenix applications.
+
+  See the [docs](../../README.md) for more information.
+  """
+
   alias Electric.Client.ShapeDefinition
 
   @shape_keys [:namespace, :where, :columns]
@@ -18,7 +24,54 @@ defmodule Phoenix.Sync do
     | {:columns, String.t()}
 
   @type param_overrides :: [param_override()]
 
+  @doc """
+  Returns the required adapter configuration for your Phoenix Endpoint or
+  `Plug.Router`.
+
+  ## Phoenix
+
+  Configure your endpoint at runtime by passing the `phoenix_sync`
+  configuration to it in the `Application.start/2` callback:
+
+      def start(_type, _args) do
+        children = [
+          # ...
+          {MyAppWeb.Endpoint, phoenix_sync: Phoenix.Sync.plug_opts()}
+        ]
+      end
+
+  ## Plug
+
+  Add the configuration to the Plug opts in your server configuration:
+
+      children = [
+        {Bandit, plug: {MyApp.Router, phoenix_sync: Phoenix.Sync.plug_opts()}}
+      ]
+
+  Your `Plug.Router` must be configured with
+  [`copy_opts_to_assign`](https://hexdocs.pm/plug/Plug.Builder.html#module-options)
+  and you should `use` the relevant `Phoenix.Sync` modules:
+
+      defmodule MyApp.Router do
+        use Plug.Router, copy_opts_to_assign: :options
+
+        use Phoenix.Sync.Controller
+        use Phoenix.Sync.Router
+
+        plug :match
+        plug :dispatch
+
+        sync "/shapes/todos", Todos.Todo
+
+        get "/shapes/user-todos" do
+          %{"user_id" => user_id} = conn.params
+          sync_render(conn, from(t in Todos.Todo, where: t.owner_id == ^user_id))
+        end
+      end
+  """
   defdelegate plug_opts(), to: Phoenix.Sync.Application
+
+  @doc false
   defdelegate plug_opts(config), to: Phoenix.Sync.Application
 
   defdelegate client!(), to: Phoenix.Sync.Client, as: :new!
diff --git a/lib/phoenix/sync/application.ex b/lib/phoenix/sync/application.ex
index 74d16da..f22d832 100644
--- a/lib/phoenix/sync/application.ex
+++ b/lib/phoenix/sync/application.ex
@@ -1,4 +1,6 @@
 defmodule Phoenix.Sync.Application do
+  @moduledoc false
+
   use Application
 
   require Logger
@@ -52,51 +54,6 @@ defmodule Phoenix.Sync.Application do
     }
   end
 
-  @doc """
-  Returns the required adapter configuration for your Phoenix Endpoint or
-  `Plug.Router`.
-
-  ## Phoenix
-
-  Configure your endpoint with the configuration at runtime by passing the
-  `phoenix_sync` configuration to your endpoint in the `Application.start/2`
-  callback:
-
-      def start(_type, _args) do
-        children = [
-          # ...
-          {MyAppWeb.Endpoint, phoenix_sync: Phoenix.Sync.plug_opts()}
-        ]
-      end
-
-  ## Plug
-
-  Add the configuration to the Plug opts in your server configuration:
-
-      children = [
-        {Bandit, plug: {MyApp.Router, phoenix_sync: Phoenix.Sync.plug_opts()}}
-      ]
-
-  Your `Plug.Router` must be configured with
-  [`copy_opts_to_assign`](https://hexdocs.pm/plug/Plug.Builder.html#module-options) and you should `use` the rele
-
-      defmodule MyApp.Router do
-        use Plug.Router, copy_opts_to_assign: :options
-
-        use Phoenix.Sync.Controller
-        use Phoenix.Sync.Router
-
-        plug :match
-        plug :dispatch
-
-        sync "/shapes/todos", Todos.Todo
-
-        get "/shapes/user-todos" do
-          %{"user_id" => user_id} = conn.params
-          sync_render(conn, from(t in Todos.Todo, where: t.owner_id == ^user_id)
-        end
-      end
-  """
   @spec plug_opts() :: keyword()
   def plug_opts do
     config() |> plug_opts()
diff --git a/lib/phoenix/sync/client.ex b/lib/phoenix/sync/client.ex
index 2e8ae43..52c36b9 100644
--- a/lib/phoenix/sync/client.ex
+++ b/lib/phoenix/sync/client.ex
@@ -1,4 +1,16 @@
 defmodule Phoenix.Sync.Client do
+  @moduledoc """
+  Low-level Elixir client.
+  Converts an `Ecto.Query` into an Elixir `Stream`:
+
+  ```elixir
+  stream = Phoenix.Sync.Client.stream(Todos.Todo)
+
+  stream =
+    Ecto.Query.from(t in Todos.Todo, where: t.completed == false)
+    |> Phoenix.Sync.Client.stream()
+  ```
+  """
+
   alias Phoenix.Sync.PredefinedShape
 
   @doc """
diff --git a/lib/phoenix/sync/live_view.ex b/lib/phoenix/sync/live_view.ex
index 9564c7f..5208174 100644
--- a/lib/phoenix/sync/live_view.ex
+++ b/lib/phoenix/sync/live_view.ex
@@ -1,4 +1,24 @@
 defmodule Phoenix.Sync.LiveView do
+  @moduledoc """
+  Swap out `Phoenix.LiveView.stream/3` for `Phoenix.Sync.LiveView.sync_stream/4` to
+  automatically keep a LiveView up-to-date with the state of your Postgres database:
+
+  ```elixir
+  defmodule MyWeb.MyLive do
+    use Phoenix.LiveView
+    import Phoenix.Sync.LiveView
+
+    def mount(_params, _session, socket) do
+      {:ok, sync_stream(socket, :todos, Todos.Todo)}
+    end
+
+    def handle_info({:sync, event}, socket) do
+      {:noreply, sync_stream_update(socket, event)}
+    end
+  end
+  ```
+  """
+
   use Phoenix.Component
 
   alias Electric.Client.Message
@@ -63,9 +83,7 @@ defmodule Phoenix.Sync.LiveView do
       {:noreply, Phoenix.Sync.LiveView.sync_stream_update(socket, event)}
     end
 
-  See the docs for
-  [`Phoenix.LiveView.stream/4`](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#stream/4)
-  for details on using LiveView streams.
+  See the docs for `Phoenix.LiveView.stream/4` for details on using LiveView streams.
 
   ## Lifecycle Events
diff --git a/lib/phoenix/sync/plug.ex b/lib/phoenix/sync/plug.ex
index 858c235..4e5c5ff 100644
--- a/lib/phoenix/sync/plug.ex
+++ b/lib/phoenix/sync/plug.ex
@@ -42,10 +42,8 @@ defmodule Phoenix.Sync.Plug do
   ```
 
   You can add additional authentication/authorization for shapes using
-  [Phoenix's
-  pipelines](https://hexdocs.pm/phoenix/Phoenix.Router.html#pipeline/2) or
-  other [`plug`
-  calls](https://hexdocs.pm/phoenix/Phoenix.Router.html#plug/2).
+  [Phoenix's pipelines](https://hexdocs.pm/phoenix/Phoenix.Router.html#pipeline/2)
+  or other [`plug` calls](https://hexdocs.pm/phoenix/Phoenix.Router.html#plug/2).
 
   ## Plug.Router
diff --git a/lib/phoenix/sync/writer.ex b/lib/phoenix/sync/writer.ex
new file mode 100644
index 0000000..27779a3
--- /dev/null
+++ b/lib/phoenix/sync/writer.ex
@@ -0,0 +1,1936 @@
+defmodule Phoenix.Sync.Writer do
+  @moduledoc """
+  Provides [write-path sync](https://electric-sql.com/docs/guides/writes) support for
+  Phoenix- or Plug-based apps.
+
+  Imagine you're building an application on sync. You've used the
+  [read-path sync utilities](../../../README.md#read-path-sync) to sync data into the
+  front-end. If the client then changes the data locally, these writes can be batched
+  up and sent back to the server.
+
+  `#{inspect(__MODULE__)}` provides a principled way of ingesting these local writes
+  and applying them to Postgres, in a way that works with and re-uses your existing
+  authorization logic and your existing `Ecto.Schema`s and `Ecto.Changeset` validation
+  functions.
+
+  This allows you to build instant, offline-capable applications that work with
+  [local optimistic state](https://electric-sql.com/docs/guides/writes).
+
+  ## Usage levels ([low](#module-low-level-usage-diy), [mid](#module-mid-level-usage), [high](#module-high-level-usage))
+
+  You don't need to use `#{inspect(__MODULE__)}` to ingest write operations using Phoenix.
+  Phoenix already ships with primitives like `Ecto.Multi` and `c:Ecto.Repo.transaction/2`.
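+
+  For instance, ingesting a batch of inserts with those primitives alone might
+  look something like the following (a minimal sketch, assuming a hypothetical
+  `MyApp.Todos.Todo` schema and a `changes` list of string-keyed maps):
+
+      multi =
+        changes
+        |> Enum.with_index()
+        |> Enum.reduce(Ecto.Multi.new(), fn {change, index}, multi ->
+          changeset = MyApp.Todos.Todo.changeset(%MyApp.Todos.Todo{}, change)
+          Ecto.Multi.insert(multi, {:insert, index}, changeset)
+        end)
+
+      {:ok, _changes} = MyApp.Repo.transaction(multi)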
+  However, `#{inspect(__MODULE__)}` provides:
+
+  - a number of convenience functions that simplify ingesting mutation operations
+  - a high-level pipeline that dries up a lot of common boilerplate and allows you to re-use
+    your existing `Plug` and `Ecto.Changeset` logic
+
+  ### Low-level usage (DIY)
+
+  If you're comfortable parsing, validating and persisting changes yourself then the
+  simplest way to use `#{inspect(__MODULE__)}` is to use `txid!/1` within
+  `c:Ecto.Repo.transaction/2`:
+
+      {:ok, txid} =
+        MyApp.Repo.transaction(fn ->
+          # ... save your changes to the database ...
+
+          # Return the transaction id.
+          #{inspect(__MODULE__)}.txid!(MyApp.Repo)
+        end)
+
+  This returns the database transaction ID that the changes were applied within. This allows
+  you to return it to the client, which can then monitor the read-path sync stream to detect
+  when the transaction syncs through, at which point the client can discard its local
+  optimistic state.
+
+  A convenient way of doing this is to parse the request data into a list of
+  `#{inspect(__MODULE__)}.Operation`s using a `#{inspect(__MODULE__)}.Format`.
+  You can then apply the changes yourself by matching on the operation data:
+
+      {:ok, %Transaction{} = txn} =
+        #{inspect(__MODULE__)}.parse_transaction(
+          my_encoded_txn,
+          format: #{inspect(__MODULE__.Format.TanstackOptimistic)}
+        )
+
+      {:ok, txid} =
+        MyApp.Repo.transaction(fn ->
+          Enum.each(txn.operations, fn
+            %{operation: :insert, relation: [_, "todos"], change: change} ->
+              # insert a Todo
+            %{operation: :update, relation: [_, "todos"], data: data, change: change} ->
+              # update a Todo
+            %{operation: :delete, relation: [_, "todos"], data: data} ->
+              # for example, if you don't want to allow deletes...
+              raise "invalid delete"
+          end)
+
+          #{inspect(__MODULE__)}.txid!(MyApp.Repo)
+        end, timeout: 60_000)
+
+  ### Mid-level usage
+
+  The pattern above is wrapped up in the more convenient `transact/4` function.
+  This abstracts the parsing and txid details whilst still allowing you to handle
+  and apply mutation operations yourself:
+
+      {:ok, txid} =
+        #{inspect(__MODULE__)}.transact(
+          my_encoded_txn,
+          MyApp.Repo,
+          fn
+            %{operation: :insert, relation: [_, "todos"], change: change} ->
+              MyApp.Repo.insert(...)
+
+            %{operation: :update, relation: [_, "todos"], data: data, change: change} ->
+              MyApp.Repo.update(Ecto.Changeset.cast(...))
+
+            %{operation: :delete, relation: [_, "todos"], data: data} ->
+              # we don't allow deletes...
+              {:error, "invalid delete"}
+          end,
+          format: #{inspect(__MODULE__.Format.TanstackOptimistic)},
+          timeout: 60_000
+        )
+
+  However, with larger applications, this flexibility can become tiresome as you end up
+  repeating boilerplate and defining your own pipeline to authorize, validate and apply
+  changes with the right error handling and return values.
+
+  ### High-level usage
+
+  To avoid this, `#{inspect(__MODULE__)}` provides a higher-level pipeline that dries up
+  the boilerplate, whilst still allowing flexibility and extensibility. You create an
+  ingest pipeline by instantiating a `#{inspect(__MODULE__)}` instance and piping into
+  `allow/3` and `apply/4` calls:
+
+      {:ok, txid, _changes} =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todo)
+        |> #{inspect(__MODULE__)}.allow(MyApp.OtherSchema)
+        |> #{inspect(__MODULE__)}.apply(transaction, Repo, format: MyApp.MutationFormat)
+
+  Or, instead of `apply/4`, you can use separate calls to `ingest/3` and then `transaction/2`.
+  This allows you to ingest multiple formats, for example:
+
+      {:ok, txid} =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todo)
+        |> #{inspect(__MODULE__)}.ingest(changes, format: MyApp.MutationFormat)
+        |> #{inspect(__MODULE__)}.ingest(other_changes, parser: &MyApp.MutationFormat.parse_other/1)
+        |> #{inspect(__MODULE__)}.ingest(more_changes, parser: {MyApp.MutationFormat, :parse_more, []})
+        |> #{inspect(__MODULE__)}.transaction(MyApp.Repo)
+
+  And at any point you can drop down to the underlying `Ecto.Multi` using
+  `to_multi/1` or `to_multi/3`:
+
+      multi =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todo)
+        |> #{inspect(__MODULE__)}.to_multi(changes, format: MyApp.MutationFormat)
+
+      # ... do anything you like with the multi ...
+
+      {:ok, changes} = Repo.transaction(multi)
+      {:ok, txid} = #{inspect(__MODULE__)}.txid(changes)
+
+  ## Controller example
+
+  For example, take a project management app that's using
+  [@TanStack/optimistic](https://github.com/TanStack/optimistic) to batch up local
+  optimistic writes and POST them to the `Phoenix.Controller` below:
+
+      defmodule MutationController do
+        use Phoenix.Controller, formats: [:json]
+
+        alias #{inspect(__MODULE__)}
+        alias #{inspect(__MODULE__)}.Format
+
+        def mutate(conn, %{"transaction" => transaction} = _params) do
+          user_id = conn.assigns.user_id
+
+          {:ok, txid, _changes} =
+            #{inspect(__MODULE__)}.new()
+            |> #{inspect(__MODULE__)}.allow(
+              Projects.Project,
+              check: &reject_invalid_params/2,
+              load: &Projects.load_for_user(&1, user_id),
+              validate: &Projects.Project.changeset/2
+            )
+            |> #{inspect(__MODULE__)}.allow(
+              Projects.Issue
+              # Use the sensible defaults:
+              # validate: Projects.Issue.changeset/2
+              # etc.
+            )
+            |> #{inspect(__MODULE__)}.apply(
+              transaction,
+              Repo,
+              format: Format.TanstackOptimistic
+            )
+
+          render(conn, :mutations, txid: txid)
+        end
+      end
+
+  The controller constructs a `#{inspect(__MODULE__)}` instance and pipes
+  it through a series of `allow/3` calls, registering functions against
+  `Ecto.Schema`s (in this case `Projects.Project` and `Projects.Issue`) to
+  validate and authorize each of these mutation operations before applying
+  them as a single transaction.
+
+  This controller can become a single, unified entry point for ingesting writes
+  into your application. You can extend the pipeline with `allow/3` calls for
+  every schema that you'd like to be able to ingest changes to.
+
+  The [`check`, `load`, `validate`, etc. callbacks](#module-callbacks) to the `allow/3`
+  function are designed to allow you to re-use your authorization and validation
+  logic from your existing `Plug` middleware and `Ecto.Changeset` functions.
+
+  > #### Warning {: .warning}
+  >
+  > The mutation operations received from clients MUST be considered as **untrusted**.
+  >
+  > Though the HTTP operation that uploaded them will have been authenticated and
+  > authorized by your existing Plug middleware as usual, the actual content of the
+  > request that is turned into writes against your database needs to be validated
+  > very carefully against the privileges of the current user.
+  >
+  > That's what `#{inspect(__MODULE__)}` is for: specifying which resources can be
+  > updated and registering functions to authorize and validate the mutation payload.
+
+  ## Transactions
+
+  The `txid` in the return value from `apply/4` and `txid/1` / `txid!/1` allows the
+  Postgres transaction ID to be returned to the client in the response data.
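+
+  For example, the `:mutations` render called in the controller example above
+  could be as simple as the following sketch (a hypothetical `MutationJSON`
+  view module, not part of this library):
+
+      defmodule MutationJSON do
+        # return the txid to the client as JSON
+        def mutations(%{txid: txid}), do: %{txid: txid}
+      end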
+
+  This allows clients to monitor the read-path sync stream and match on the
+  arrival of the same transaction id. When the client receives this transaction id
+  back through its sync stream, it knows that it can discard the local optimistic
+  state for that transaction. (This is a more robust way of managing optimistic state
+  than just matching on instance IDs, as it allows for local changes to be rebased
+  on concurrent changes to the same data from other users.)
+
+  `#{inspect(__MODULE__)}` uses `Ecto.Multi`'s transaction update mechanism
+  under the hood, which means that either all the operations in a client
+  transaction are accepted or none are. See `to_multi/1` for how you can hook
+  into the `Ecto.Multi` after applying your change data.
+
+  > #### Compatibility {: .info}
+  >
+  > `#{inspect(__MODULE__)}` can only return transaction ids when connecting to
+  > a Postgres database (a repo with `adapter: Ecto.Adapters.Postgres`). You can
+  > use this module for other databases, but the returned txid will be `nil`.
+
+  ## Client Libraries
+
+  `#{inspect(__MODULE__)}` is not coupled to any particular client-side implementation.
+  See Electric's [write pattern guides and example code](https://electric-sql.com/docs/guides/writes)
+  for implementation strategies and examples.
+
+  Instead, `#{inspect(__MODULE__)}` provides an adapter pattern where you can register
+  a `format` adapter or `parser` function to parse the expected payload format from a
+  client-side library into the struct that `#{inspect(__MODULE__)}` expects.
+
+  The currently supported format adapters are:
+
+  - [TanStack/optimistic](https://github.com/TanStack/optimistic): "A library
+    for creating fast optimistic updates with flexible backend support that pairs
+    seamlessly with sync engines"
+
+    Integration:
+
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.ingest(mutation_data, format: #{inspect(__MODULE__.Format.TanstackOptimistic)})
+        |> #{inspect(__MODULE__)}.transaction(Repo)
+
+  ## Usage
+
+  Much as every controller action must be authenticated, authorized and validated
+  to prevent users writing invalid data or data that they do not have permission
+  to modify, mutations **MUST** be validated for both correctness (are the given
+  values valid?) and permissions (is the current user allowed to apply the given
+  mutation?).
+
+  This dual verification -- of data and permissions -- is performed by a pipeline
+  of application-defined callbacks for every model that you allow writes to:
+
+  - [`check`](#module-check) - a function that performs a "pre-flight"
+    sanity check of the user-provided data in the mutation; this should just
+    validate the data and not usually hit the database; checks are performed on
+    all operations in a transaction before proceeding to the next steps in the
+    pipeline; this allows for fast rejection of invalid data before performing
+    more expensive operations
+  - [`load`](#module-load) - a function that takes the original data
+    and returns the existing model from the database, if it exists, for an update
+    or delete operation
+  - [`validate`](#module-validate) - create and validate an `Ecto.Changeset`
+    from the source data and mutation changes; this is intended to be compatible with
+    using existing schema changeset functions; note that, as per any changeset function,
+    the validate function can perform both authorization and validation
+  - [`pre_apply` and `post_apply`](#module-pre_apply-and-post_apply) - add
+    arbitrary `Ecto.Multi` operations to the transaction based on the current operation
+
+  See `apply/4` and the [Callbacks](#module-callbacks) section for how the transaction is
+  processed internally and how best to use these callback functions to express your
+  app's authorization and validation requirements.
+
+  Calling `new/0` creates an empty writer configuration. This alone does not permit
+  any mutations: in order to allow writes from clients you must call `allow/3` with
+  a schema module and some callback functions.
+
+      # create an empty writer configuration
+      writer = #{inspect(__MODULE__)}.new()
+
+      # allow writes to the `Todos.Todo` table
+      # using `Todos.check_mutation/1` to validate mutation data before
+      # touching the database
+      writer = #{inspect(__MODULE__)}.allow(writer, Todos.Todo, check: &Todos.check_mutation/1)
+
+  If the table name on the client differs from the Postgres table, then you can
+  add a `table` option that specifies the client table name that this `allow/3`
+  call applies to:
+
+      # `client_todos` is the name of the `todos` table on the clients
+      writer =
+        #{inspect(__MODULE__)}.allow(
+          writer,
+          Todos.Todo,
+          validate: &Todos.validate_mutation/2,
+          table: "client_todos"
+        )
+
+  ## Callbacks
+
+  ### Check
+
+  The `check` option should be a 1-arity function whose purpose is to test
+  the mutation data against the authorization rules for the application and
+  model before attempting any database access.
+
+  It should return `:ok` if the changes are valid, or `{:error, reason}` if
+  they're invalid.
+
+  If any of the changes fail the auth test, then the entire transaction will be
+  rejected.
+
+  This is the first line of defence against malicious writes as it provides a
+  quick check of the data from the clients before any reads or writes to the
+  database.
+
+  Note that the writer pipeline checks all the operations before proceeding to
+  load, validate and apply each operation in turn.
+
+      def check(%#{inspect(__MODULE__.Operation)}{} = operation) do
+        # :ok or {:error, "..."}
+      end
+
+  ### Load
+
+  The `load` callback takes the `data` in `update` or `delete` mutations (i.e.
+  the original data before changes), and uses it to retrieve the original
+  `Ecto.Schema` struct from the database.
+
+  It can be a 1- or 2-arity function. The 1-arity version receives just the
+  `data` parameters.
+  The 2-arity version receives the `Ecto.Repo` that the
+  transaction is being applied to and the `data` parameters.
+
+      # 1-arity version
+      def load(%{"column" => "value"} = data) do
+        # Repo.get(...)
+      end
+
+      # 2-arity version
+      def load(repo, %{"column" => "value"} = data) do
+        # repo.get(...)
+      end
+
+  If not provided, this defaults to using `c:Ecto.Repo.get_by/3` with the primary
+  key(s) defined on the model.
+
+  For `insert` operations this load function is not used. Instead, the original
+  struct is created by calling the `__struct__/0` function on the `Ecto.Schema`
+  module.
+
+  ### Validate
+
+  The `validate` callback performs the usual role of a changeset function: to
+  validate the changes against the model's data constraints using the functions
+  in `Ecto.Changeset`.
+
+  It should return an `Ecto.Changeset` instance (or possibly the original
+  schema struct in the case of `delete`s). If any of the transaction's
+  changesets are marked as invalid, then the entire transaction is aborted.
+
+  If not specified, the `validate` function defaults to the schema model's
+  standard `changeset/2` function if available.
+
+  The callback can be either a 2- or 3-arity function.
+
+  The 2-arity version will receive the `Ecto.Schema` struct returned from the
+  `load` function and the mutation changes. The 3-arity version will receive
+  the `load`ed struct, the changes and the operation.
+
+      # 2-arity version
+      def changeset(%Todo{} = data, %{} = changes) do
+        data
+        |> Ecto.Changeset.cast(changes, [:title, :completed])
+        |> Ecto.Changeset.validate_required([:title])
+      end
+
+      # 3-arity version
+      def changeset(%Todo{} = data, %{} = changes, operation)
+          when operation in [:insert, :update, :delete] do
+        # ...
+      end
+
+  #### Primary keys
+
+  Whether the params for insert operations contain a value for the new primary
+  key is application-specific. It's certainly not required if you have declared your
+  `Ecto.Schema` model with a primary key set to `autogenerate: true`.
+
+  It's worth noting that if you are accepting primary key values as part of
+  your `insert` changes, then you should use UUID primary keys for your models
+  to prevent conflicts.
+
+  ### pre_apply and post_apply
+
+  These callbacks, run before or after the actual `insert`, `update` or
+  `delete` operation, allow for the addition of side effects to the transaction.
+
+  They are passed an empty `Ecto.Multi` struct which is then
+  [merged](`Ecto.Multi.merge/2`) into the writer's transaction.
+
+  They also allow for more validation/authorization steps as any operation
+  within the callback that returns an "invalid" operation will abort the entire
+  transaction.
+
+      def pre_or_post_apply(%Ecto.Multi{} = multi, %Ecto.Changeset{} = change, %#{inspect(__MODULE__)}.Context{} = context) do
+        multi
+        # add some side-effects
+        # |> Ecto.Multi.run(#{inspect(__MODULE__)}.operation_name(context, :image), fn _changes ->
+        #   with :ok <- File.write(image.name, image.contents) do
+        #     {:ok, nil}
+        #   end
+        # end)
+        #
+        # validate the current transaction and abort using an {:error, value} tuple
+        # |> Ecto.Multi.run(#{inspect(__MODULE__)}.operation_name(context, :my_validation), fn _changes ->
+        #   {:error, "reject entire transaction"}
+        # end)
+      end
+
+  Note the use of `operation_name/2` when adding operations. Every name in the
+  final `Ecto.Multi` struct must be unique; `operation_name/2` generates names
+  that are guaranteed to be unique to the current operation and callback.
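+
+  For example, a `post_apply` callback that records an audit entry for every
+  accepted write might look something like this (a minimal sketch, assuming a
+  hypothetical `MyApp.Audit.Entry` schema):
+
+      def post_apply(multi, %Ecto.Changeset{} = _changeset, context) do
+        # operation_name/2 guarantees a unique Ecto.Multi name for this
+        # operation and callback
+        name = Phoenix.Sync.Writer.operation_name(context, :audit)
+
+        # context.operation is the current mutation operation; record its action
+        entry = %MyApp.Audit.Entry{action: to_string(context.operation.operation)}
+
+        Ecto.Multi.insert(multi, name, entry)
+      end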
+
+  ### Per-operation callbacks
+
+  If you want to re-use an existing function on a per-operation basis, then in
+  your write configuration you can define both top-level and per-operation
+  callbacks:
+
+      #{inspect(__MODULE__)}.allow(
+        Todos.Todo,
+        load: &Todos.fetch_for_user(&1, user_id),
+        check: &Todos.check_mutation(&1, user_id),
+        validate: &Todos.Todo.changeset/2,
+        update: [
+          # for inserts and deletes, use &Todos.Todo.changeset/2 but for updates
+          # use this function
+          validate: &Todos.Todo.update_changeset/2,
+          pre_apply: &Todos.pre_apply_update_todo/3
+        ],
+        insert: [
+          # optional validate, pre_apply and post_apply
+          # overrides for insert operations
+        ],
+        delete: [
+          # optional validate, pre_apply and post_apply
+          # overrides for delete operations
+        ]
+      )
+
+  ## End-to-end usage
+
+  The combination of the `check`, `load`, `validate`, `pre_apply` and
+  `post_apply` functions can be composed to provide strong guarantees of
+  validity.
+
+  The aim is to provide an interface as similar to that used in controller
+  functions as possible.
+
+  Here we show an example controller module that allows updating of `Todo`s via
+  a standard `HTTP PUT` update handler and also via `HTTP POST`s to the
+  `mutation` handler which applies optimistic writes via this module.
+
+  We use the `load` function to validate the ownership of the original `Todo`
+  by looking up the data using both the `id` and the `user_id`. This makes it
+  impossible for user `a` to update todos belonging to user `b`.
+
+      defmodule MyController do
+        use Phoenix.Controller, formats: [:html, :json]
+
+        alias #{inspect(__MODULE__)}
+
+        # The standard HTTP PUT update handler
+        def update(conn, %{"id" => id, "todo" => todo_params}) do
+          user_id = conn.assigns.user_id
+
+          with %Todos.Todo{} = todo <- fetch_for_user(%{"id" => id}, user_id),
+               :ok <- validate_params(todo_params, user_id),
+               {:ok, updated_todo} <- Todos.update(todo, todo_params) do
+            redirect(conn, to: ~p"/todos/\#{updated_todo.id}")
+          end
+        end
+
+        # The HTTP POST mutations handler which receives JSON data
+        def mutations(conn, %{"transaction" => transaction} = _params) do
+          user_id = conn.assigns.user_id
+
+          {:ok, txid, _changes} =
+            Writer.new()
+            |> Writer.allow(
+              Todos.Todo,
+              check: &validate_mutation(&1, user_id),
+              load: &fetch_for_user(&1, user_id)
+            )
+            |> Writer.apply(transaction, Repo, format: Writer.Format.TanstackOptimistic)
+
+          render(conn, :mutations, txid: txid)
+        end
+
+        # Included here for completeness but in a real app would be a
+        # public function in the Todos context.
+        # Because we're validating the ownership of the Todo here we add an
+        # extra layer of auth checks, preventing one user from modifying
+        # the Todos of another.
+        defp fetch_for_user(%{"id" => id}, user_id) do
+          from(t in Todos.Todo, where: t.id == ^id and t.user_id == ^user_id)
+          |> Repo.one()
+        end
+
+        defp validate_mutation(%Writer.Operation{} = op, user_id) do
+          with :ok <- validate_params(op.data, user_id) do
+            validate_params(op.changes, user_id)
+          end
+        end
+
+        defp validate_params(%{"user_id" => user_id}, user_id), do: :ok
+        defp validate_params(%{} = _params, _user_id), do: {:error, "invalid user_id"}
+      end
+
+  Because `Phoenix.Sync.Writer` leverages `Ecto.Multi` to do the work of
+  applying changes and managing errors, you're also free to extend the actions
+  that are performed with every transaction using `pre_apply` and `post_apply`
+  callbacks configured per-table or per-table per-action (insert, update,
+  delete).
+  See `allow/3` for more information on the configuration options
+  for each table.
+
+  The result of `to_multi/1` or `to_multi/3` is an `Ecto.Multi` instance so you can also just
+  append operations using the normal `Ecto.Multi` functions:
+
+      {:ok, txid, _changes} =
+        Writer.new()
+        |> Writer.allow(Todo, ...)
+        |> Writer.to_multi(transaction, parser: &my_transaction_parser/1)
+        |> Ecto.Multi.insert(:my_action, %Event{})
+        |> Writer.transaction(Repo)
+  """
+
+  import Kernel, except: [apply: 2, apply: 3]
+
+  defmodule Context do
+    @moduledoc """
+    Provides context within callback functions.
+
+    ## Fields
+
+    * `index` - the 0-indexed position of the current operation within the transaction
+    * `changes` - the current `Ecto.Multi` changes so far
+    * `operation` - the current `Phoenix.Sync.Writer.Operation`
+    * `callback` - the name of the current callback, `:pre_apply` or `:post_apply`
+    * `schema` - the `Ecto.Schema` module associated with the current operation
+    """
+
+    @derive {Inspect, except: [:writer, :action]}
+
+    defstruct [:index, :writer, :changes, :operation, :callback, :schema, :action, :pk]
+  end
+
+  defmodule Error do
+    @moduledoc false
+
+    defexception [:message, :operation]
+
+    @impl true
+    def message(e) do
+      "Operation #{inspect(e.operation)} failed: #{e.message}"
+    end
+  end
+
+  require Record
+  require Logger
+
+  Record.defrecordp(:opkey, schema: nil, operation: nil, index: 0, pk: nil)
+
+  alias __MODULE__.Operation
+  alias __MODULE__.Transaction
+  alias __MODULE__.Format
+
+  @type pre_post_func() :: (Ecto.Multi.t(), Ecto.Changeset.t(), context() -> Ecto.Multi.t())
+
+  @operation_options [
+    validate: [
+      type: {:fun, 2},
+      doc: """
+      A 2-arity function that returns a changeset for the given mutation data.
+
+      Arguments:
+
+      - `schema` the original `Ecto.Schema` model returned from the `load` function
+      - `changes` a map of changes from the mutation operation
+
+      Return value:
+
+      - an `Ecto.Changeset`
+      """,
+      type_spec: quote(do: (Ecto.Schema.t(), data() -> Ecto.Changeset.t()))
+    ],
+    pre_apply: [
+      type: {:fun, 3},
+      doc: """
+      An optional callback that allows for the prepending of operations to the `Ecto.Multi`.
+
+      Arguments and return value as per the global `pre_apply` callback.
+      """,
+      type_spec: quote(do: pre_post_func()),
+      type_doc: "`t:pre_post_func/0`"
+    ],
+    post_apply: [
+      type: {:fun, 3},
+      doc: """
+      An optional callback that allows for the appending of operations to the `Ecto.Multi`.
+
+      Arguments and return value as per the global `post_apply` callback.
+ """, + type_spec: quote(do: pre_post_func()), + type_doc: "`t:pre_post_func/0`" + ] + ] + @operation_options_schema NimbleOptions.new!(@operation_options) + + @type data() :: %{binary() => any()} + @type mutation() :: %{required(binary()) => any()} + @type operation() :: :insert | :update | :delete + @operations [:insert, :update, :delete] + @type txid() :: Transaction.id() + @type context() :: %Context{ + index: non_neg_integer(), + changes: Ecto.Mult.changes(), + schema: Ecto.Schema.t(), + operation: :insert | :update | :delete, + callback: :load | :validate | :pre_apply | :post_apply + } + + @type operation_opts() :: unquote([NimbleOptions.option_typespec(@operation_options_schema)]) + + @operation_schema [ + type: :keyword_list, + keys: @operation_options, + doc: NimbleOptions.docs(NimbleOptions.new!(@operation_options)), + type_spec: quote(do: operation_opts()), + type_doc: "`t:operation_opts/0`" + ] + + @parse_schema_options [ + format: [ + type: :atom, + doc: """ + A module implementing the `#{inspect(__MODULE__.Format)}` + behaviour. + + See `#{inspect(__MODULE__.Format)}`. + """, + type_spec: quote(do: Format.t()), + type_doc: "[`Format.t()`](`t:#{inspect(Format)}.t/0`)" + ], + parser: [ + type: {:or, [{:fun, 1}, :mfa]}, + doc: """ + A function that parses some input data and returns a + [`%Transaction{}`](`#{inspect(__MODULE__.Transaction)}`) struct or an error. + See `c:#{inspect(__MODULE__.Format)}.parse_transaction/1`. + """, + type_doc: "`#{inspect(Format)}.parser_fun() | mfa()`", + type_spec: quote(do: Format.parser_fun()) + ] + ] + @parse_schema NimbleOptions.new!(@parse_schema_options) + + @allow_schema NimbleOptions.new!( + table: [ + type: {:or, [:string, {:list, :string}]}, + doc: """ + Override the table name of the `Ecto.Schema` struct to + allow for mapping between table names on the client and within Postgres. + + If you pass just a table name, then any schema prefix in the client tables is ignored, so + + Writer.allow(Todos, table: "todos") + + will match client operations for `["public", "todos"]` and `["application", "todos"]` etc. + + If you provide a 2-element list then the mapping will be exact and only + client relations matching the full `[schema, table]` pair will match the + given schema. + + Writer.allow(Todos, table: ["public", "todos"]) + + Will match client operations for `["public", "todos"]` but + **not** `["application", "todos"]` etc. + + Defaults to `Model.__schema__(:source)`, or if the Ecto schema + module has specified a `namespace` `[Model.__schema__(:prefix), + Model.__schema__(:source)]`. + """, + type_doc: "`String.t() | [String.t(), ...]`", + type_spec: quote(do: String.t() | [String.t(), ...]) + ], + accept: [ + type: {:list, {:in, @operations}}, + doc: """ + A list of actions to accept. + + A transaction containing an operation not in the accept list will be rejected. + + Defaults to accepting all operations, `#{inspect(@operations)}`. + """, + type_spec: quote(do: [operation(), ...]) + ], + check: [ + type: {:fun, 1}, + doc: """ + A function that validates every %#{inspect(__MODULE__.Operation)}{} in the transaction for correctness. + + This is run before any database access is performed and so provides an + efficient way to prevent malicious writes without hitting your database. + + Defaults to a function that allows all operations: `fn _ -> :ok end`. 
+ """, + type_spec: quote(do: (Operation.t() -> :ok | {:error, term()})) + ], + before_all: [ + type: {:fun, 1}, + doc: """ + Run only once (per transaction) after the parsing and `check` callback have + completed and before `load` and `validate` functions run. + + Useful for pre-loading data from the database that can be shared across + all operation callbacks for all the mutations. + + Arguments: + + - `multi` an `Ecto.Multi` struct + + Return value: + + - `Ecto.Multi` struct with associated data + + Defaults to no callback. + """, + type_spec: quote(do: (Ecto.Multi.t() -> Ecto.Multi.t())) + ], + load: [ + type: {:or, [{:fun, 1}, {:fun, 2}]}, + doc: """ + A 1- or 2-arity function that accepts either the mutation + operation's data or an `Ecto.Repo` instance and the mutation data and + returns the original row from the database. + + Arguments: + + - `repo` the `Ecto.Repo` instance passed to `apply/4` or `transaction/3` + - `data` the original operation data + + Valid return values are: + + - `struct()` - an `Ecto.Schema` struct, that must match the + module passed to `allow/3` + - `{:ok, struct()}` - as above but wrapped in an `:ok` tuple + - `nil` - if no row matches the search criteria, or + - `{:error, String.t()}` - as `nil` but with a custom error string + + A return value of `nil` or `{:error, reason}` will abort the transaction. + + This function is only used for updates or deletes. For + inserts, the `__struct__/0` function defined by `Ecto.Schema` is used to + create an empty schema struct. + + ## Examples + + # load from a known Repo + load: fn %{"id" => id} -> MyApp.Repo.get(Todos.Todo, id) + + # load from the repo passed to `#{__MODULE__}.transaction/2` + load: fn repo, %{"id" => id} -> repo.get(Todos.Todo, id) + + If not provided defaults to `c:Ecto.Repo.get_by/3` using the + table's schema module and its primary keys. + """, + type_spec: + quote( + do: + (Ecto.Repo.t(), data() -> + Ecto.Schema.t() | {:ok, Ecto.Schema.t()} | nil | {:error, String.t()}) + | (data() -> + Ecto.Schema.t() + | {:ok, Ecto.Schema.t()} + | nil + | {:error, String.t()}) + ) + ], + validate: [ + type: {:or, [{:fun, 2}, {:fun, 3}]}, + doc: """ + a 2- or 3-arity function that returns an `Ecto.Changeset` for a given mutation. + + ### Callback params + + - `data` an Ecto.Schema struct matching the one used when + calling `allow/2` returned from the `load` function. + - `changes` a map of changes to apply to the `data`. + - `operation` (for 3-arity callbacks only) the operation + action, one of `:insert`, `:update` or `:delete` + + At absolute minimum, this should call + `Ecto.Changeset.cast/3` to validate the proposed data: + + def my_changeset(data, changes, _operation) do + Ecto.Changeset.cast(data, changes, @permitted_columns) + end + + Defaults to the given model's `changeset/2` function if + defined, raises if no changeset function can be found. + """, + type_spec: + quote( + do: + (Ecto.Schema.t(), data() -> Ecto.Changeset.t()) + | (Ecto.Schema.t(), data(), operation() -> Ecto.Changeset.t()) + ) + ], + pre_apply: [ + type: {:fun, 3}, + doc: """ + an optional callback that allows for the pre-pending of + operations to the `Ecto.Multi` representing a mutation transaction. + + If should be a 3-arity function. 
+
+                    ### Arguments
+
+                    - `multi` - an empty `%Ecto.Multi{}` instance that you should apply
+                      your actions to
+                    - `changeset` - the changeset representing the individual mutation operation
+                    - `context` - the current change [context](`#{inspect(__MODULE__.Context)}`)
+
+                    The result should be the `Ecto.Multi` instance which will be
+                    [merged](`Ecto.Multi.merge/2`) with the one representing the mutation
+                    operation.
+
+                    Because every action in an `Ecto.Multi` must have a unique
+                    key, we advise using the `operation_name/2` function to generate a unique
+                    operation name based on the `context`.
+
+                        def pre_apply(multi, changeset, context) do
+                          name = #{inspect(__MODULE__)}.operation_name(context, :event_insert)
+                          Ecto.Multi.insert(multi, name, %Event{todo_id: changeset.data.id})
+                        end
+
+                    Defaults to `nil`.
+                    """,
+                    type_spec: quote(do: pre_post_func()),
+                    type_doc: "`t:pre_post_func/0`"
+                  ],
+                  post_apply: [
+                    type: {:fun, 3},
+                    doc: """
+                    An optional callback function that allows for the
+                    appending of operations to the `Ecto.Multi` representing a mutation
+                    transaction.
+
+                    See the docs for `:pre_apply` for the function signature and arguments.
+
+                    Defaults to `nil`.
+                    """,
+                    type_spec: quote(do: pre_post_func()),
+                    type_doc: "`t:pre_post_func/0`"
+                  ],
+                  insert:
+                    Keyword.put(@operation_schema, :doc, """
+                    Callbacks for validating and modifying `insert` operations.
+
+                    Accepts definitions for the `validate`, `pre_apply` and
+                    `post_apply` functions for `insert` operations that will override the
+                    top-level equivalents.
+
+                    See the documentation for `allow/3`.
+
+                    The only difference with these callback functions is that
+                    the `action` parameter is redundant and therefore not passed.
+
+                    Defaults to `[]`, using the top-level functions for all operations.
+                    """),
+                  update:
+                    Keyword.put(@operation_schema, :doc, """
+                    Callbacks for validating and modifying `update` operations.
+                    See the documentation for `insert`.
+                    """),
+                  delete:
+                    Keyword.update!(
+                      @operation_schema,
+                      :keys,
+                      &Keyword.put(&1, :validate, type: {:or, [{:fun, 1}, {:fun, 2}]})
+                    )
+                    |> Keyword.put(:doc, """
+                    Callbacks for validating and modifying `delete` operations.
+                    See the documentation for `insert`.
+                    """)
+                )
+
+  defstruct ingest: [], mappings: %{}
+
+  @type allow_opts() :: [unquote(NimbleOptions.option_typespec(@allow_schema))]
+  @type parse_opts() :: [unquote(NimbleOptions.option_typespec(@parse_schema))]
+  @type schema_config() :: %{required(atom()) => term()}
+  @type ingest_change() :: {Format.t(), Format.parser_fun(), Format.transaction_data()}
+  @type repo_transaction_opts() :: keyword()
+  @type transact_opts() :: [parse_opts() | repo_transaction_opts()]
+
+  @type t() :: %__MODULE__{
+          ingest: [ingest_change()],
+          mappings: %{(binary() | [binary(), ...]) => schema_config()}
+        }
+
+  @doc """
+  Create a new empty writer.
+
+  Empty writers will reject writes to any tables. You should configure writes
+  to the permitted tables by calling `allow/3`.
+  """
+  @spec new() :: t()
+  def new do
+    %__MODULE__{}
+  end
+
+  @doc """
+  Allow writes to the given `Ecto.Schema`.
+
+  Only tables specified in calls to `allow/3` will be accepted by later calls
+  to `transaction/3`. Any changes to tables not explicitly defined by `allow/3`
+  calls will cause the entire transaction to be rejected.
+
+  ## Examples
+
+      # allow writes to the Todo table using
+      # `MyApp.Todos.check_mutation/1` to validate operations
+      #{inspect(__MODULE__)}.new()
+      |> #{inspect(__MODULE__)}.allow(
+        MyApp.Todos.Todo,
+        check: &MyApp.Todos.check_mutation/1
+      )
+
+      # A more complex configuration adding a `post_apply` callback to inserts
+      # and using a custom query to load the original database value.
+      #{inspect(__MODULE__)}.new()
+      |> #{inspect(__MODULE__)}.allow(
+        MyApp.Todos.Todo,
+        load: &MyApp.Todos.get_for_mutation/1,
+        check: &MyApp.Todos.check_mutation/1,
+        insert: [
+          post_apply: &MyApp.Todos.post_apply_insert_mutation/3
+        ]
+      )
+
+  ## Supported options
+
+  #{NimbleOptions.docs(@allow_schema)}
+  """
+  @spec allow(t(), module(), allow_opts()) :: t()
+  def allow(writer, schema, opts \\ [])
+
+  def allow(%__MODULE__{} = writer, schema, opts) when is_atom(schema) do
+    {schema, table, pks} = validate_schema!(schema)
+
+    config = NimbleOptions.validate!(opts, @allow_schema)
+
+    key = config[:table] || table
+    load_fun = load_fun(schema, pks, opts)
+    check_fun = check_fun(opts)
+
+    accept = Keyword.get(config, :accept, @operations) |> MapSet.new()
+
+    table_config = %{
+      schema: schema,
+      table: table,
+      pks: pks,
+      accept: accept,
+      check: check_fun
+    }
+
+    table_config =
+      Enum.reduce(@operations, table_config, fn action, table_config ->
+        Map.put(
+          table_config,
+          action,
+          action_config(schema, config, action, load: load_fun, table: key, pks: pks)
+        )
+      end)
+
+    Map.update!(writer, :mappings, &Map.put(&1, key, table_config))
+  end
+
+  defp validate_schema!(module) do
+    if !Code.ensure_loaded?(module), do: raise(ArgumentError, message: "Unknown module #{module}")
+
+    if !(function_exported?(module, :__changeset__, 0) &&
+           function_exported?(module, :__schema__, 1)),
+       do: raise(ArgumentError, message: "Not an Ecto.Schema module #{module}")
+
+    table =
+      if prefix = module.__schema__(:prefix),
+        do: [prefix, module.__schema__(:source)],
+        else: module.__schema__(:source)
+
+    {module, table, module.__schema__(:primary_key)}
+  end
+
+  defp action_config(schema, config, action, extra) do
+    validate_fun =
+      get_in(config, [action, :validate]) || config[:validate] || default_changeset!(schema) ||
+        raise(ArgumentError, message: "No validate/3 or validate/2 defined for #{action}s")
+
+    # nil-hooks are just ignored
+    pre_apply_fun = get_in(config, [action, :pre_apply]) || config[:pre_apply]
+    post_apply_fun = get_in(config, [action, :post_apply]) || config[:post_apply]
+
+    before_all_fun = config[:before_all]
+
+    Map.merge(
+      Map.new(extra),
+      %{
+        schema: schema,
+        before_all: before_all_fun,
+        validate: validate_fun,
+        pre_apply: pre_apply_fun,
+        post_apply: post_apply_fun
+      }
+    )
+  end
+
+  defp load_fun(schema, pks, config) do
+    load_fun =
+      case config[:load] do
+        nil ->
+          fn repo, change ->
+            key = Map.new(pks, fn col -> {col, Map.fetch!(change, to_string(col))} end)
+            repo.get_by(schema, key)
+          end
+
+        fun when is_function(fun, 1) ->
+          fn _repo, change ->
+            fun.(change)
+          end
+
+        fun when is_function(fun, 2) ->
+          fun
+
+        _invalid ->
+          raise(ArgumentError, message: "`load` should be a 1- or 2-arity function")
+      end
+
+    fn
+      _repo, :insert, _change, _ctx ->
+        schema.__struct__()
+
+      repo, _action, change, ctx ->
+        # look for the data in the changes so far.
uses the multi changes from + # append_changeset_result/3 + pk = + Map.new(pks, fn col -> + {to_string(col), change |> Map.fetch!(to_string(col)) |> to_string()} + end) + + Enum.filter(ctx.changes, fn + {opkey(schema: ^schema, pk: ^pk), _value} = change -> change + _ -> nil + end) + |> Enum.max_by(fn {opkey(index: index), _} -> index end, &>=/2, fn -> nil end) + |> case do + nil -> + # not in changeset, so call load + load_fun.(repo, change) + + {opkey(operation: :delete), _} -> + # last op on this key was a delete + nil + + {_opkey, value} -> + {:ok, value} + end + end + end + + defp check_fun(opts) do + case opts[:check] do + nil -> + fn _ -> :ok end + + fun1 when is_function(fun1, 1) -> + fun1 + + fun when is_function(fun) -> + info = Function.info(fun) + + raise ArgumentError, + message: + "Invalid check function. Expected a 1-arity function but got arity #{info[:arity]}" + + invalid -> + raise ArgumentError, + message: + "Invalid check function. Expected a 1-arity function but got #{inspect(invalid)}" + end + end + + defp default_changeset!(schema) do + cond do + function_exported?(schema, :changeset, 3) -> &schema.changeset/3 + function_exported?(schema, :changeset, 2) -> &schema.changeset/2 + true -> nil + end + end + + @doc """ + Add the given changes to the operations that will be applied within a `transaction/3`. + + Examples: + + {:ok, txid} = + #{inspect(__MODULE__)}.new() + |> #{inspect(__MODULE__)}.allow(MyApp.Todo) + |> #{inspect(__MODULE__)}.ingest(changes, format: MyApp.MutationFormat) + |> #{inspect(__MODULE__)}.ingest(other_changes, parser: &MyApp.MutationFormat.parse_other/1) + |> #{inspect(__MODULE__)}.ingest(more_changes, parser: {MyApp.MutationFormat, :parse_more, []}) + |> #{inspect(__MODULE__)}.transaction(MyApp.Repo) + + Supported options: + + #{NimbleOptions.docs(@parse_schema)} + """ + @spec ingest(t(), Format.transaction_data(), parse_opts()) :: t() + def ingest(writer, changes, opts) do + case validate_ingest_opts(opts) do + {:ok, format, parser_fun} -> + %{writer | ingest: [{format, parser_fun, changes} | writer.ingest]} + + {:error, message} -> + raise Error, message: message + end + end + + @doc """ + Ingest and write changes to the given repo in a single call. + + #{inspect(__MODULE__)}.new() + |> #{inspect(__MODULE__)}.apply(changes, Repo, parser: &MyFormat.parse/1) + + is equivalent to: + + #{inspect(__MODULE__)}.new() + |> #{inspect(__MODULE__)}.ingest(changes, parser: &MyFormat.parse/1) + |> #{inspect(__MODULE__)}.transaction(Repo) + """ + @spec apply(t(), Format.transaction_data(), Ecto.Repo.t(), transact_opts()) :: + {:ok, txid(), Ecto.Multi.changes()} | Ecto.Multi.failure() + def apply(%__MODULE__{} = writer, changes, repo, opts) + when is_atom(repo) and is_list(changes) do + {writer_opts, txn_opts} = split_writer_txn_opts(opts) + + writer + |> ingest(changes, writer_opts) + |> transaction(repo, txn_opts) + end + + @writer_option_keys Keyword.keys(@parse_schema_options) + + defp split_writer_txn_opts(opts) do + Keyword.split(opts, @writer_option_keys) + end + + @doc """ + Given a writer configuration created using `allow/3` translate the list of + mutations into an `Ecto.Multi` operation. 
+
+  Example:
+
+      %Ecto.Multi{} = multi =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todos.Todo, check: &my_check_function/1)
+        |> #{inspect(__MODULE__)}.allow(MyApp.Options.Option, check: &my_check_function/1)
+        |> #{inspect(__MODULE__)}.ingest(changes, format: #{inspect(__MODULE__.Format.TanstackOptimistic)})
+        |> #{inspect(__MODULE__)}.to_multi()
+
+  If you want to add extra operations to the mutation transaction, beyond those
+  applied by any `pre_apply` or `post_apply` callbacks in your mutation config,
+  then use the functions in `Ecto.Multi` to do those as normal.
+
+  Use `transaction/3` to apply the changes to the database and return the
+  transaction id.
+
+  `to_multi/1` builds an `Ecto.Multi` struct containing the operations required to
+  write the mutation operations to the database.
+
+  The order of operations is:
+
+  ### 1. Parse
+
+  The transaction data is parsed, using either the `format` or the `parser` function
+  supplied in `ingest/3`.
+
+  ### 2. Check
+
+  The user input data in each operation in the transaction is tested for validity
+  via the `check` function.
+
+  At this point no database operations have taken place. Errors at the parse or
+  `check` stage result in an early exit. The purpose of the `check` callback is to
+  sanity check the incoming mutation data against basic sanitization rules, much
+  as you would do with `Plug` middleware and controller params pattern matching.
+
+  Now that we have a list of validated mutation operations, the next step is:
+
+  ### 3. Before-all
+
+  Perform any actions defined in the `before_all` callback.
+
+  This only happens once per transaction, the first time the model owning the
+  callback is included in the operation list.
+
+  The following actions happen once per operation in the transaction:
+
+  ### 4. Load
+
+  The `load` function is called to retrieve the source row from the database
+  (for `update` and `delete` operations), or the schema's `__struct__/0`
+  function is called to instantiate an empty struct (`insert`).
+
+  ### 5. Validate
+
+  The `validate` function is called with the result of the `load` function
+  and the operation's changes.
+
+  ### 6. Pre-apply
+
+  The `pre_apply` callback is called with a `multi` instance, the result of the
+  `validate` function and the current `Context`. The result is
+  [merged](`Ecto.Multi.merge/2`) into the transaction's ongoing `Ecto.Multi`.
+
+  ### 7. Apply
+
+  The actual operation is applied to the database using one of
+  `Ecto.Multi.insert/4`, `Ecto.Multi.update/4` or `Ecto.Multi.delete/4`.
+
+  ### 8. Post-apply
+
+  Finally, the `post_apply` callback is called.
+
+  Any error in any of these stages will abort the entire transaction and leave
+  your database untouched.
+  """
+  @spec to_multi(t()) :: Ecto.Multi.t()
+  def to_multi(%__MODULE__{} = writer) do
+    # Want to return a multi here but have that multi fail without contacting
+    # the db if any of the check calls fail.
+    #
+    # Looking at `Ecto.Multi.__apply__/4`, it first checks for invalid
+    # operations before doing anything. So I can just add an error to a blank
+    # multi and return that and the transaction step will fail before touching
+    # the repo.
+    writer.ingest
+    |> Enum.reverse()
+    |> Enum.reduce_while(start_multi(), fn {_format, parser_fun, changes}, multi ->
+      with {:ok, %Transaction{} = txn} <- parse_check(writer, parser_fun, changes) do
+        {:cont, Enum.reduce(txn.operations, multi, &append_multi(&2, &1, writer))}
+      else
+        {step, {:error, error}} ->
+          {:halt, Ecto.Multi.error(Ecto.Multi.new(), step, error)}
+      end
+    end)
+  end
+
+  @doc """
+  Ingest changes and map them into an `Ecto.Multi` instance ready to apply
+  using `#{inspect(__MODULE__)}.transaction/3` or `c:Ecto.Repo.transaction/2`.
+
+  This is a wrapper around `ingest/3` and `to_multi/1`.
+
+  Example:
+
+      %Ecto.Multi{} = multi =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todos.Todo, check: &my_check_function/1)
+        |> #{inspect(__MODULE__)}.allow(MyApp.Options.Option, check: &my_check_function/1)
+        |> #{inspect(__MODULE__)}.to_multi(changes, format: #{inspect(__MODULE__.Format.TanstackOptimistic)})
+
+  """
+  @spec to_multi(t(), Format.transaction_data(), parse_opts()) :: Ecto.Multi.t()
+  def to_multi(%__MODULE__{} = writer, changes, opts) do
+    writer
+    |> ingest(changes, opts)
+    |> to_multi()
+  end
+
+  defp parse_check(writer, parser_fun, changes) do
+    with {:parse, {:ok, %Transaction{} = txn}} <- {:parse, parser_fun.(changes)},
+         {:check, :ok} <- {:check, check_transaction(writer, txn)} do
+      {:ok, txn}
+    end
+  end
+
+  @doc """
+  Use the `format` or `parser` given in the options to decode the given
+  transaction data.
+
+  This can be used to handle mutation operations explicitly:
+
+      {:ok, txn} = #{inspect(__MODULE__)}.parse_transaction(my_json_tx_data, format: #{inspect(__MODULE__.Format.TanstackOptimistic)})
+
+      {:ok, txid} =
+        Repo.transaction(fn ->
+          Enum.each(txn.operations, fn operation ->
+            # do something with the given operation
+            # raise if something is wrong...
+ end) + # return the transaction id + #{inspect(__MODULE__)}.txid!(Repo) + end) + """ + @spec parse_transaction(Format.transaction_data(), parse_opts()) :: + {:ok, Transaction.t()} | {:error, term()} + def parse_transaction(changes, opts) do + with {:ok, opts} <- NimbleOptions.validate(opts, @parse_schema), + {:ok, _format, parser_fun} <- validate_ingest_opts(opts) do + parser_fun.(changes) + end + end + + defp validate_ingest_opts(opts) do + with {:ok, config} <- NimbleOptions.validate(opts, @parse_schema), + format = Keyword.get(config, :format), + parser = Keyword.get(config, :parser), + {:ok, parser_func} <- parser_fun(format, parser) do + {:ok, format, parser_func} + end + end + + defp parser_fun(format, parser) do + case format_parser(format) do + parser when is_function(parser, 1) -> + {:ok, parser} + + nil -> + case parser do + nil -> + {:error, "no valid format or parser"} + + parser when is_function(parser, 1) -> + {:ok, parser} + + {m, f, a} when is_atom(m) and is_atom(f) and is_list(a) -> + {:ok, fn changes -> Kernel.apply(m, f, [changes | a]) end} + end + + {:error, _} = error -> + error + end + end + + defp format_parser(nil), do: nil + + defp format_parser(format) when is_atom(format) do + if Code.ensure_loaded?(format) && function_exported?(format, :parse_transaction, 1) do + Function.capture(format, :parse_transaction, 1) + else + {:error, + "#{inspect(format)} does not implement the #{inspect(__MODULE__.Format)} behaviour"} + end + end + + defp check_transaction(%__MODULE__{} = writer, %Transaction{} = txn) do + Enum.reduce_while(txn.operations, :ok, fn operation, :ok -> + case mutation_actions(operation, writer) do + {:ok, actions} -> + %{check: check, accept: accept} = actions + + if MapSet.member?(accept, operation.operation) do + case check.(operation) do + :ok -> {:cont, :ok} + {:error, reason} -> {:halt, {:error, %Error{message: reason, operation: operation}}} + end + else + {:halt, + {:error, + %Error{ + message: + "Action #{inspect(operation.operation)} not in :accept list: #{MapSet.to_list(accept) |> inspect()}", + operation: operation + }}} + end + + {:error, reason} -> + {:halt, {:error, reason}} + end + end) + end + + @txid_name {:__phoenix_sync__, :txid} + @txid_query "SELECT txid_current() as txid" + + defp start_multi do + Ecto.Multi.new() + |> txid_step() + end + + defp txid_step(multi \\ Ecto.Multi.new()) do + Ecto.Multi.run(multi, @txid_name, fn repo, _ -> + case repo.__adapter__() do + Ecto.Adapters.Postgres -> + txid(repo) + + adapter -> + Logger.warning("Unsupported adapter #{adapter}. 
txid will be nil") + {:ok, nil} + end + end) + end + + defp has_txid_step?(multi) do + multi + |> Ecto.Multi.to_list() + |> Enum.any?(fn {name, _} -> name == @txid_name end) + end + + defp append_multi(multi, %Operation{} = op, %__MODULE__{} = writer) do + with {:ok, actions} <- mutation_actions(op, writer), + {:ok, action} <- Map.fetch(actions, op.operation) do + ctx = %Context{ + index: op.index, + writer: writer, + operation: op, + schema: action.schema, + action: action + } + + multi + |> apply_before_all(action) + |> mutation_changeset(op, ctx, action) + |> validate_pks(op, ctx, action) + |> apply_before(op, ctx, action) + |> apply_changeset(op, ctx, action) + |> apply_after(op, ctx, action) + else + {:error, reason} -> + Ecto.Multi.error(multi, {:error, op.index}, %Error{message: reason, operation: op}) + end + end + + defp apply_before_all(multi, %{before_all: nil} = _action) do + multi + end + + defp apply_before_all(multi, %{before_all: before_all_fun} = action) + when is_function(before_all_fun, 1) do + key = {action.schema, :before_all} + + Ecto.Multi.merge(multi, fn + %{^key => true} = _changes -> + Ecto.Multi.new() + + _changes -> + before_all_fun.(Ecto.Multi.new()) + |> Ecto.Multi.put({action.schema, :before_all}, true) + end) + end + + defp mutation_changeset(multi, %Operation{} = op, %Context{} = ctx, action) do + %{schema: schema, validate: changeset_fun} = action + %{index: idx, operation: operation, data: lookup_data, changes: change_data} = op + + Ecto.Multi.run(multi, {:__phoenix_sync__, :changeset, idx}, fn repo, changes -> + ctx = %{ctx | changes: changes} + + case action.load.(repo, operation, lookup_data, ctx) do + struct when is_struct(struct, schema) -> + apply_changeset_fun(changeset_fun, struct, op, change_data, action) + + struct when is_struct(struct) -> + {:error, + %Error{ + message: + "load function returned an inconsistent value. Expected %#{schema}{}, got %#{struct.__struct__}{}", + operation: op + }} + + {:ok, struct} when is_struct(struct, schema) -> + apply_changeset_fun(changeset_fun, struct, op, change_data, action) + + {:ok, struct} when is_struct(struct) -> + {:error, + %Error{ + message: + "load function returned an inconsistent value. 
Expected %#{schema}{}, got %#{struct.__struct__}{}", + operation: op + }} + + {:error, reason} -> + {:error, %Error{message: reason, operation: op}} + + nil -> + pks = Map.new(action.pks, fn col -> {col, Map.fetch!(lookup_data, to_string(col))} end) + + {:error, + %Error{message: "No original record found for row #{inspect(pks)}", operation: op}} + + invalid -> + {:error, + %Error{ + message: "Invalid return value from load(), got: #{inspect(invalid)}", + operation: op + }} + end + end) + end + + defp apply_changeset_fun(changeset_fun, data, op, change_data, action) do + case changeset_fun do + fun3 when is_function(fun3, 3) -> + {:ok, fun3.(data, change_data, op.operation)} + + fun2 when is_function(fun2, 2) -> + {:ok, fun2.(data, change_data)} + + # delete changeset/validation functions can just look at the original + fun1 when is_function(fun1, 1) -> + {:ok, fun1.(data)} + + _ -> + {:error, "Invalid changeset_fun for #{inspect(action.table)} #{inspect(op)}"} + end + end + + # inserts don't need the pk fields + defp validate_pks(multi, %Operation{operation: :insert}, _ctx, _action) do + multi + end + + defp validate_pks(multi, %Operation{index: idx, data: lookup}, _ctx, action) do + do_validate_pks(multi, action.pks, lookup, idx) + end + + defp do_validate_pks(multi, pks, data, n) do + case Enum.reject(pks, &Map.has_key?(data, to_string(&1))) do + [] -> + multi + + keys -> + Ecto.Multi.error( + multi, + {:error, n}, + {"Operation data is missing required primary keys: #{inspect(keys)}", data} + ) + end + end + + defp apply_before(multi, operation, ctx, %{pre_apply: pre_apply_fun} = action) do + apply_hook(multi, operation, ctx, {:pre_apply, pre_apply_fun}, action) + end + + defp apply_after(multi, operation, ctx, %{post_apply: post_apply_fun} = action) do + apply_hook(multi, operation, ctx, {:post_apply, post_apply_fun}, action) + end + + defp apply_hook(multi, _operation, _ctx, {_, nil}, _action) do + multi + end + + defp apply_hook(multi, operation, ctx, {hook_name, hook_fun}, action) do + Ecto.Multi.merge(multi, fn changes -> + changeset = changeset!(changes, operation) + + ctx = %{ctx | changes: changes, callback: hook_name} + + case hook_fun do + fun3 when is_function(fun3, 3) -> + fun3.(Ecto.Multi.new(), changeset, ctx) + + _ -> + raise "Invalid #{hook_name} for #{inspect(action.table)} #{inspect(operation.operation)}" + end + |> validate_callback!(operation.operation, action) + end) + end + + defp validate_callback!(%Ecto.Multi{} = multi, _op, _action), do: multi + + defp validate_callback!(value, op, action), + do: + raise(ArgumentError, + message: + "Invalid return type #{inspect(value)} for #{op} into #{inspect(action.table)}. 
Expected %Ecto.Multi{}" + ) + + defp changeset!(changes, %Operation{index: n}) do + changeset!(changes, n) + end + + defp changeset!(changes, n) when is_integer(n) do + Map.fetch!(changes, {:__phoenix_sync__, :changeset, n}) + end + + defp apply_changeset(multi, %Operation{operation: :insert} = op, ctx, action) do + Ecto.Multi.merge(multi, fn changes -> + Ecto.Multi.insert(Ecto.Multi.new(), operation_name(ctx), changeset!(changes, op)) + end) + |> append_changeset_result(ctx, action) + end + + defp apply_changeset(multi, %Operation{operation: :update} = op, ctx, action) do + Ecto.Multi.merge(multi, fn changes -> + Ecto.Multi.update(Ecto.Multi.new(), operation_name(ctx), changeset!(changes, op)) + end) + |> append_changeset_result(ctx, action) + end + + defp apply_changeset(multi, %Operation{operation: :delete} = op, ctx, action) do + Ecto.Multi.merge(multi, fn changes -> + Ecto.Multi.delete(Ecto.Multi.new(), operation_name(ctx), changeset!(changes, op)) + end) + |> append_changeset_result(ctx, action) + end + + # Put the result of each op into the multi under a deterministic key based on + # the primary key. + # This allows for quick lookup of data that we've already accessed + defp append_changeset_result(multi, ctx, action) do + name = operation_name(ctx) + + Ecto.Multi.merge(multi, fn %{^name => result} -> + pk = Map.new(action.pks, &{to_string(&1), Map.fetch!(result, &1) |> to_string()}) + + Ecto.Multi.put(Ecto.Multi.new(), load_key(ctx, pk), result) + end) + end + + defp mutation_actions(%Operation{relation: [prefix, name] = relation} = operation, write) + when is_binary(name) and is_binary(prefix) do + case write.mappings do + %{^relation => actions} -> + {:ok, actions} + + %{^name => actions} -> + {:ok, actions} + + _ -> + {:error, + %Error{ + message: "No configuration for writes to table #{inspect(name)}", + operation: operation + }} + end + end + + defp mutation_actions(%Operation{relation: name} = operation, write) when is_binary(name) do + case write.mappings do + %{^name => actions} -> + {:ok, actions} + + mappings -> + case Enum.filter(Map.keys(mappings), &match?([_, ^name], &1)) do + [] -> + {:error, + %Error{ + message: "No configuration for writes to table #{inspect(name)}", + operation: operation + }} + + [key] -> + {:ok, Map.fetch!(write.mappings, key)} + + [_ | _] = keys -> + {:error, + %Error{ + message: + "Multiple matches for relation #{inspect(name)}: #{inspect(keys)}. Please pass full `[\"schema\", \"name\"]` relation in mutation data", + operation: operation + }} + end + end + end + + @doc """ + Runs the mutation inside a transaction. + + Since the mutation operation is expressed as an `Ecto.Multi` operation, see + the [`Ecto.Repo` docs](https://hexdocs.pm/ecto/Ecto.Repo.html#c:transaction/2-use-with-ecto-multi) + for the result if any of your mutations returns an error. 
+
+      #{inspect(__MODULE__)}.new()
+      |> #{inspect(__MODULE__)}.allow(MyApp.Todos.Todo)
+      |> #{inspect(__MODULE__)}.allow(MyApp.Options.Option)
+      |> #{inspect(__MODULE__)}.ingest(
+        changes,
+        format: #{inspect(__MODULE__)}.Format.TanstackOptimistic
+      )
+      |> #{inspect(__MODULE__)}.transaction(MyApp.Repo)
+      |> case do
+        {:ok, txid, _changes} ->
+          # return the txid to the client
+          Plug.Conn.send_resp(conn, 200, Jason.encode!(%{txid: txid}))
+        {:error, _failed_operation, failed_value, _changes_so_far} ->
+          # extract the error message from the changeset returned as `failed_value`
+          error =
+            Ecto.Changeset.traverse_errors(failed_value, fn {msg, opts} ->
+              Regex.replace(~r"%{(\w+)}", msg, fn _, key ->
+                opts |> Keyword.get(String.to_existing_atom(key), key) |> to_string()
+              end)
+            end)
+          Plug.Conn.send_resp(conn, 400, Jason.encode!(error))
+      end
+
+  Also supports normal fun/0 or fun/1 style transactions, much like
+  `c:Ecto.Repo.transaction/2`, returning the txid of the operation:
+
+      {:ok, txid, todo} =
+        #{inspect(__MODULE__)}.transaction(fn ->
+          Repo.insert!(changeset)
+        end, Repo)
+  """
+  @spec transaction(t() | Ecto.Multi.t(), Ecto.Repo.t(), keyword()) ::
+          {:ok, txid(), Ecto.Multi.changes()} | Ecto.Multi.failure()
+  def transaction(writer_or_multi, repo, opts \\ [])
+
+  def transaction(%__MODULE__{ingest: []}, _repo, _opts) do
+    {:error, "no changes ingested"}
+  end
+
+  def transaction(%__MODULE__{} = writer, repo, opts) do
+    writer
+    |> to_multi()
+    |> transaction(repo, opts)
+  end
+
+  def transaction(%Ecto.Multi{} = multi, repo, opts) when is_atom(repo) do
+    wrapped_multi =
+      if has_txid_step?(multi) do
+        multi
+      else
+        Ecto.Multi.prepend(multi, txid_step())
+      end
+
+    with {:ok, changes} <- repo.transaction(wrapped_multi, opts) do
+      {txid, changes} = Map.pop!(changes, @txid_name)
+      {:ok, txid, changes}
+    end
+  end
+
+  @doc """
+  Apply operations from a mutation directly via a transaction.
+
+  `operation_fun` is a 1-arity function that receives each of the
+  `%#{inspect(__MODULE__.Operation)}{}` structs within the mutation data and
+  should apply them appropriately. It should return `:ok` or `{:ok, result}` if
+  successful or `{:error, reason}` if the operation is invalid or failed to
+  apply. If any operation returns `{:error, _}` or raises, then the entire
+  transaction is aborted.
+
+  This function will return `{:error, reason}` if the transaction data fails to parse.
+
+      {:ok, txid} =
+        #{inspect(__MODULE__)}.transact(
+          my_encoded_txn,
+          MyApp.Repo,
+          fn
+            %{operation: :insert, relation: [_, "todos"], changes: changes} ->
+              MyApp.Repo.insert(...)
+            %{operation: :update, relation: [_, "todos"], data: data, changes: changes} ->
+              MyApp.Repo.update(Ecto.Changeset.cast(...))
+            %{operation: :delete, relation: [_, "todos"], data: data} ->
+              # we don't allow deletes...
+              {:error, "invalid delete"}
+          end,
+          format: #{inspect(__MODULE__.Format.TanstackOptimistic)},
+          timeout: 60_000
+        )
+
+  Any of the `opts` not used by this module are passed on to the
+  `c:Ecto.Repo.transaction/2` call.
+
+  This is equivalent to the below:
+
+      {:ok, txn} =
+        #{inspect(__MODULE__)}.parse_transaction(
+          my_encoded_txn,
+          format: #{inspect(__MODULE__.Format.TanstackOptimistic)}
+        )
+
+      {:ok, txid} =
+        MyApp.Repo.transaction(fn ->
+          Enum.each(txn.operations, fn
+            %{operation: :insert, relation: [_, "todos"], changes: changes} ->
+              # insert a Todo
+            %{operation: :update, relation: [_, "todos"], data: data, changes: changes} ->
+              # update a Todo
+            %{operation: :delete, relation: [_, "todos"], data: data} ->
+              # we don't allow deletes...
+              raise "invalid delete"
+          end)
+          #{inspect(__MODULE__)}.txid!(MyApp.Repo)
+        end, timeout: 60_000)
+  """
+  @spec transact(
+          Format.transaction_data(),
+          Ecto.Repo.t(),
+          operation_fun :: (Operation.t() -> :ok | {:ok, any()} | {:error, any()}),
+          transact_opts()
+        ) :: {:ok, txid()} | {:error, any()}
+  def transact(changes, repo, operation_fun, opts)
+      when is_function(operation_fun, 1) and is_atom(repo) do
+    {parse_opts, txn_opts} = split_writer_txn_opts(opts)
+
+    with {:ok, %Transaction{} = txn} <- parse_transaction(changes, parse_opts) do
+      repo.transaction(
+        fn ->
+          Enum.reduce_while(txn.operations, :ok, fn op, :ok ->
+            case operation_fun.(op) do
+              {:ok, _result} ->
+                {:cont, :ok}
+
+              :ok ->
+                {:cont, :ok}
+
+              {:error, _reason} = error ->
+                {:halt, error}
+
+              other ->
+                raise ArgumentError,
+                      "expected to return :ok, {:ok, _} or {:error, _}, got: #{inspect(other)}"
+            end
+          end)
+          |> case do
+            {:error, reason} ->
+              repo.rollback(reason)
+
+            :ok ->
+              txid!(repo)
+          end
+        end,
+        txn_opts
+      )
+    end
+  end
+
+  @doc """
+  Extract the transaction id from changes or from an `Ecto.Repo` within a
+  transaction.
+
+  This allows you to use a standard `c:Ecto.Repo.transaction/2` call to apply
+  mutations defined using `to_multi/3` and extract the transaction id afterwards.
+
+  Example:
+
+      {:ok, changes} =
+        #{inspect(__MODULE__)}.new()
+        |> #{inspect(__MODULE__)}.allow(MyApp.Todos.Todo)
+        |> #{inspect(__MODULE__)}.allow(MyApp.Options.Option)
+        |> #{inspect(__MODULE__)}.to_multi(changes, format: #{inspect(__MODULE__)}.Format.TanstackOptimistic)
+        |> MyApp.Repo.transaction()
+
+      {:ok, txid} = #{inspect(__MODULE__)}.txid(changes)
+
+  It also allows you to get a transaction id from any active transaction:
+
+      MyApp.Repo.transaction(fn ->
+        {:ok, txid} = #{inspect(__MODULE__)}.txid(MyApp.Repo)
+      end)
+
+  Attempting to run `txid/1` on a repo outside a transaction will return an
+  error.
+  """
+  @spec txid(Ecto.Multi.changes() | Ecto.Repo.t()) :: {:ok, txid()} | :error | {:error, term()}
+  def txid(%{@txid_name => txid} = _changes), do: {:ok, txid}
+  def txid(changes) when is_map(changes), do: :error
+
+  def txid(repo) when is_atom(repo) do
+    if repo.in_transaction?() do
+      with {:ok, %{rows: [[txid]]}} <- repo.query(@txid_query) do
+        {:ok, txid}
+      end
+    else
+      {:error, %Error{message: "not in a transaction"}}
+    end
+  end
+
+  @doc """
+  Returns the transaction id or raises on an error.
+
+  See `txid/1`.
+  """
+  @spec txid!(Ecto.Multi.changes() | Ecto.Repo.t()) :: txid()
+  def txid!(%{@txid_name => txid} = _changes), do: txid
+  def txid!(%{}), do: raise(ArgumentError, message: "No txid in change data")
+
+  def txid!(repo) when is_atom(repo) do
+    case txid(repo) do
+      {:ok, txid} -> txid
+      {:error, reason} -> raise reason
+    end
+  end
+
+  @doc """
+  Return a unique operation name for use in `pre_apply` or `post_apply` callbacks.
+
+  `Ecto.Multi` requires that all operation names be unique within a
+  transaction.
+  This function gives you a simple way to generate a name for your
+  own operations that is guaranteed not to conflict with any other.
+
+  Example:
+
+      #{inspect(__MODULE__)}.new()
+      |> #{inspect(__MODULE__)}.allow(
+        MyModel,
+        pre_apply: fn multi, changeset, context ->
+          name = #{inspect(__MODULE__)}.operation_name(context)
+          Ecto.Multi.insert(multi, name, AuditEvent.for_changeset(changeset))
+        end
+      )
+  """
+  def operation_name(%Context{} = ctx) do
+    {ctx.schema, ctx.operation.operation, ctx.index}
+  end
+
+  @doc """
+  Like `operation_name/1` but allows for a custom label.
+  """
+  @spec operation_name(context(), term()) :: term()
+  def operation_name(%Context{} = ctx, label) do
+    {operation_name(ctx), label}
+  end
+
+  defp load_key(ctx, pk) do
+    opkey(schema: ctx.schema, operation: ctx.operation.operation, index: ctx.index, pk: pk)
+  end
+end
diff --git a/lib/phoenix/sync/writer/format.ex b/lib/phoenix/sync/writer/format.ex
new file mode 100644
index 0000000..13844e3
--- /dev/null
+++ b/lib/phoenix/sync/writer/format.ex
@@ -0,0 +1,99 @@
+defmodule Phoenix.Sync.Writer.Format do
+  @moduledoc """
+  Defines a behaviour that applications can implement in order to handle custom
+  data formats within a `Phoenix.Sync.Writer`.
+
+  The exact format of the change messages coming from the client is
+  unimportant; however, `Phoenix.Sync.Writer` requires that certain essential
+  information be included:
+
+  - `operation` - one of `insert`, `update` or `delete`
+  - `relation` - the table name that the operation applies to.
+  - `data` - the original data before the mutation was applied. **Required** for
+    `update` or `delete` operations
+  - `changes` - the changes to apply. **Required** for `insert` and `update` operations
+
+  However you choose to encode this information on the client, you simply need
+  to set the `format` of your write configuration to a module that
+  implements this behaviour.
+
+  ### Implementation guide
+
+  Once you have parsed the data coming from the client, over HTTP or even raw
+  TCP, use `Phoenix.Sync.Writer.Operation.new/4` (or
+  `Phoenix.Sync.Writer.Transaction.operation/4`) to validate the values and
+  create a `%Phoenix.Sync.Writer.Operation{}` struct.
+
+  `Phoenix.Sync.Writer.Transaction.parse_operations/2` is a helper function for
+  error handling when mapping a list of encoded operation data using
+  `Phoenix.Sync.Writer.Transaction.operation/4`.
+
+  ### Example:
+
+  We use Protocol Buffers to encode the incoming information using `:protox`
+  and implement this module's `c:parse_transaction/1` callback using the `Protox` decode functions:
+
+      defmodule MyApp.Protocol do
+        use Protox,
+          namespace: __MODULE__,
+          schema: ~S[
+            syntax: "proto2";
+            message Operation {
+              enum Op {
+                INSERT = 0;
+                UPDATE = 1;
+                DELETE = 2;
+              }
+              required Op op = 1;
+              required string table = 2;
+              map<string, string> original = 3;
+              map<string, string> modified = 4;
+            }
+            message Transaction {
+              repeated Operation operations = 1;
+            }
+          ]
+
+        alias Phoenix.Sync.Writer
+
+        @behaviour #{inspect(__MODULE__)}
+
+        @impl #{inspect(__MODULE__)}
+        def parse_transaction(proto) when is_binary(proto) do
+          with {:ok, %{operations: ops}} <- MyApp.Protocol.Transaction.decode(proto),
+               {:ok, operations} <- Writer.Transaction.parse_operations(ops, &convert_operation/1) do
+            {:ok, %Writer.Transaction{operations: operations}}
+          end
+        end
+
+        defp convert_operation(%MyApp.Protocol.Operation{} = operation) do
+          Writer.Transaction.operation(
+            operation.op,
+            operation.table,
+            operation.original,
+            operation.modified
+          )
+        end
+      end
+
+  We then use our protocol module as the `format` when we `apply/4` our transaction
+  data, passing the serialized protobuf message as the mutation data:
+
+      {:ok, txid, _changes} =
+        Phoenix.Sync.Writer.new()
+        |> Phoenix.Sync.Writer.apply(protobuf_data, MyApp.Repo, format: MyApp.Protocol)
+  """
+  alias Phoenix.Sync.Writer.Transaction
+
+  @type t() :: module()
+  @typedoc "Raw data from a client that will be parsed to a `#{inspect(__MODULE__.Transaction)}` by the Writer's format parser"
+  @type transaction_data() :: term()
+  @type parse_transaction_result() :: {:ok, Transaction.t()} | {:error, term()}
+  @type parser_fun() :: (transaction_data() -> parse_transaction_result()) | mfa()
+
+  @doc """
+  Translate some data format into a `Phoenix.Sync.Writer.Transaction` with a
+  list of operations to apply.
+  """
+  @callback parse_transaction(term()) :: parse_transaction_result()
+end
diff --git a/lib/phoenix/sync/writer/format/tanstack_optimistic.ex b/lib/phoenix/sync/writer/format/tanstack_optimistic.ex
new file mode 100644
index 0000000..b857f80
--- /dev/null
+++ b/lib/phoenix/sync/writer/format/tanstack_optimistic.ex
@@ -0,0 +1,41 @@
+defmodule Phoenix.Sync.Writer.Format.TanstackOptimistic do
+  @moduledoc """
+  Implements the `Phoenix.Sync.Writer.Format` behaviour for the data format used by
+  [TanStack/optimistic](https://github.com/TanStack/optimistic).
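+
+  As a rough sketch of the shape this parser accepts (keys taken from
+  `parse_operation/1` below; values are illustrative), a transaction is a
+  list of mutation maps, or a JSON string encoding one:
+
+      [
+        %{
+          "type" => "update",
+          "syncMetadata" => %{"relation" => ["public", "todos"]},
+          "original" => %{"id" => "1", "title" => "Old title"},
+          "changes" => %{"title" => "New title"}
+        }
+      ]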
+  """
+
+  alias Phoenix.Sync.Writer.Transaction
+
+  @behaviour Phoenix.Sync.Writer.Format
+
+  @impl Phoenix.Sync.Writer.Format
+  def parse_transaction([]), do: {:error, "empty transaction"}
+
+  def parse_transaction(json) when is_binary(json) do
+    with {:ok, operations} <- Jason.decode(json) do
+      parse_transaction(operations)
+    end
+  end
+
+  def parse_transaction(operations) when is_list(operations) do
+    with {:ok, operations} <- Transaction.parse_operations(operations, &parse_operation/1) do
+      Transaction.new(operations)
+    end
+  end
+
+  def parse_transaction(operations) do
+    {:error, "invalid operation list #{inspect(operations)}"}
+  end
+
+  def parse_operation(%{"type" => type} = m) do
+    {data, changes} =
+      case type do
+        # for inserts we don't use the original data, just the changes
+        "insert" -> {%{}, m["modified"]}
+        "update" -> {m["original"], m["changes"]}
+        "delete" -> {m["original"], %{}}
+      end
+
+    Transaction.operation(type, get_in(m, ["syncMetadata", "relation"]), data, changes)
+  end
+end
diff --git a/lib/phoenix/sync/writer/operation.ex b/lib/phoenix/sync/writer/operation.ex
new file mode 100644
index 0000000..331ddf1
--- /dev/null
+++ b/lib/phoenix/sync/writer/operation.ex
@@ -0,0 +1,116 @@
+defmodule Phoenix.Sync.Writer.Operation do
+  @moduledoc """
+  Represents a mutation operation received from a client.
+
+  To handle custom formats, translate incoming changes into an [`Operation`](`#{inspect(__MODULE__)}`)
+  struct using `new/4`.
+
+  """
+
+  defstruct [:index, :operation, :relation, :data, :changes]
+
+  order = %{insert: 0, update: 1, delete: 2}
+
+  sort_mapper = fn {op, _} -> order[op] end
+
+  equivalent_ops = fn op ->
+    upper = op |> to_string() |> String.upcase()
+    [op, to_string(op), upper, String.to_atom(upper)]
+  end
+
+  @accepted_operations Map.new([:insert, :update, :delete], &{&1, equivalent_ops.(&1)})
+
+  allowed_values =
+    @accepted_operations
+    |> Enum.sort_by(sort_mapper)
+    |> Enum.flat_map(&Enum.sort(elem(&1, 1), :desc))
+    |> Enum.map(&"`#{inspect(&1)}`")
+    |> Enum.join(", ")
+
+  @type t() :: %__MODULE__{
+          operation: :insert | :update | :delete,
+          relation: binary() | [binary(), ...],
+          data: map(),
+          changes: map()
+        }
+
+  @type new_result() :: {:ok, t()} | {:error, String.t()}
+  @doc """
+  Takes data from a mutation and validates it before returning a struct.
+
+  ## Parameters
+
+  - `operation` one of #{allowed_values}
+  - `table` the client table name for the write. Can either be a plain string
+    name `"table"` or a list with `["schema", "table"]`.
+  - `data` the original values (see [Updates vs Inserts vs Deletes](#new/4-updates-vs-inserts-vs-deletes))
+  - `changes` any updates to apply (see [Updates vs Inserts vs Deletes](#new/4-updates-vs-inserts-vs-deletes))
+
+  ## Updates vs Inserts vs Deletes
+
+  The `#{inspect(__MODULE__)}` struct has two value fields, `data` and `changes`.
+
+  `data` represents what's already in the database, and `changes` what's
+  going to be written over the top of this.
+
+  For `insert` operations, `data` is ignored so the new values for the
+  inserted row should be in `changes`.
+
+  For `deletes`, `changes` is ignored and `data` should contain the row
+  specification to delete. This needn't be the full row, but must contain
+  values for all the **primary keys** for the table.
+
+  For `updates`, `data` should contain the original row values and `changes`
+  the changed fields.
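+
+  For example, a minimal sketch of an `update` (the table and values here are
+  illustrative, not part of the API):
+
+      {:ok, op} =
+        #{inspect(__MODULE__)}.new(
+          "update",
+          "todos",
+          %{"id" => 1},
+          %{"title" => "Changed title"}
+        )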
+
+  These fields map to the arguments of the `Ecto.Changeset.change/2` and
+  `Ecto.Changeset.cast/4` functions: `data` is used to populate the first
+  argument of these functions and `changes` the second.
+  """
+  @spec new(binary() | atom(), binary() | [binary(), ...], map() | nil, map() | nil) ::
+          new_result()
+  def new(operation, table, data, changes) do
+    with {:ok, operation} <- validate_operation(operation),
+         {:ok, relation} <- validate_table(table),
+         {:ok, data} <- validate_data(data, operation),
+         {:ok, changes} <- validate_changes(changes, operation) do
+      {:ok, %__MODULE__{operation: operation, relation: relation, data: data, changes: changes}}
+    end
+  end
+
+  @spec new!(binary() | atom(), binary() | [binary(), ...], map() | nil, map() | nil) :: t()
+  def new!(operation, table, data, changes) do
+    case new(operation, table, data, changes) do
+      {:ok, operation} -> operation
+      {:error, reason} -> raise ArgumentError, message: reason
+    end
+  end
+
+  for {operation, equivalents} <- @accepted_operations, valid <- equivalents do
+    defp validate_operation(unquote(valid)), do: {:ok, unquote(operation)}
+  end
+
+  defp validate_operation(op),
+    do:
+      {:error,
+       "Invalid operation #{inspect(op)}, expected one of #{inspect(~w(insert update delete))}"}
+
+  defp validate_table(name) when is_binary(name), do: {:ok, name}
+
+  defp validate_table([prefix, name] = relation) when is_binary(name) and is_binary(prefix),
+    do: {:ok, relation}
+
+  defp validate_table(table), do: {:error, "Invalid table: #{inspect(table)}"}
+
+  defp validate_data(_, :insert), do: {:ok, %{}}
+  defp validate_data(%{} = data, _), do: {:ok, data}
+
+  defp validate_data(data, _op),
+    do: {:error, "Invalid data, expected a map, got #{inspect(data)}"}
+
+  defp validate_changes(_, :delete), do: {:ok, %{}}
+  defp validate_changes(%{} = changes, _insert_or_update), do: {:ok, changes}
+
+  defp validate_changes(changes, _insert_or_update),
+    do: {:error, "Invalid changes, expected a map, got #{inspect(changes)}"}
+end
diff --git a/lib/phoenix/sync/writer/transaction.ex b/lib/phoenix/sync/writer/transaction.ex
new file mode 100644
index 0000000..40abfb3
--- /dev/null
+++ b/lib/phoenix/sync/writer/transaction.ex
@@ -0,0 +1,74 @@
+defmodule Phoenix.Sync.Writer.Transaction do
+  @moduledoc """
+  Represents a transaction containing a list of `Phoenix.Sync.Writer.Operation`s
+  that should be applied atomically.
+
+  ```elixir
+  {:ok, operations} = Transaction.parse_operations(raw_operations, &parse_operation/1)
+
+  {:ok, %Transaction{} = txn} = Transaction.new(operations)
+  ```
+  """
+
+  defstruct txid: nil, operations: []
+
+  alias Phoenix.Sync.Writer.Operation
+
+  @type id() :: integer()
+  @type t() :: %__MODULE__{
+          txid: nil | id(),
+          operations: [Operation.t(), ...]
+        }
+
+  @doc """
+  Return a new, empty `Transaction` struct.
+  """
+  @spec empty() :: t()
+  def empty, do: %__MODULE__{}
+
+  @spec new([Operation.t()]) :: {:ok, t()} | {:error, term()}
+  def new([]) do
+    {:error, "empty transaction"}
+  end
+
+  def new(operations) when is_list(operations) do
+    if Enum.all?(operations, &match?(%Operation{}, &1)) do
+      operations =
+        operations
+        |> Enum.with_index()
+        |> Enum.map(fn {operation, idx} -> %{operation | index: idx} end)
+
+      {:ok, %__MODULE__{operations: operations}}
+    else
+      {:error, "Invalid operations list"}
+    end
+  end
+
+  defdelegate operation(operation, table, data, changes), to: Operation, as: :new
+  defdelegate operation!(operation, table, data, changes), to: Operation, as: :new!
+ + @doc """ + Helper function to parse a list of encoded Operations. + """ + @spec parse_operations([term()], (term() -> {:ok, Operation.t()} | {:error, term()})) :: + {:ok, [Operation.t()]} | {:error, term()} + def parse_operations(raw_operations, parse_function) when is_function(parse_function, 1) do + with operations when is_list(operations) <- + do_parse_operations(raw_operations, parse_function) do + {:ok, Enum.reverse(operations)} + end + end + + defp do_parse_operations(raw_operations, parse_function) do + raw_operations + |> Enum.reduce_while([], fn raw_op, operations -> + case parse_function.(raw_op) do + {:ok, operation} -> + {:cont, [operation | operations]} + + {:error, _reason} = error -> + {:halt, error} + end + end) + end +end diff --git a/mix.exs b/mix.exs index 9d61b6c..ec9f740 100644 --- a/mix.exs +++ b/mix.exs @@ -54,7 +54,8 @@ defmodule Phoenix.Sync.MixProject do defp deps_for_env(:dev) do [ - {:ex_doc, ">= 0.0.0", only: :dev, runtime: false} + {:ex_doc, ">= 0.0.0", only: :dev, runtime: false}, + {:makeup_ts, ">= 0.0.0", only: :dev, runtime: false} ] end @@ -65,10 +66,24 @@ defmodule Phoenix.Sync.MixProject do defp docs do [ main: "readme", - extras: ["README.md", "LICENSE"] + extras: ["README.md", "LICENSE"], + before_closing_head_tag: docs_before_closing_head_tag() ] end + defp docs_live? do + System.get_env("MIX_DOCS_LIVE", "false") == "true" + end + + defp docs_before_closing_head_tag do + if docs_live?(), + do: fn + :html -> ~s[] + _ -> "" + end, + else: fn _ -> "" end + end + defp package do [ links: %{ diff --git a/mix.lock b/mix.lock index e56794a..583fadc 100644 --- a/mix.lock +++ b/mix.lock @@ -10,7 +10,7 @@ "decimal": {:hex, :decimal, "2.3.0", "3ad6255aa77b4a3c4f818171b12d237500e63525c2fd056699967a3e7ea20f62", [:mix], [], "hexpm", "a4d66355cb29cb47c3cf30e71329e58361cfcb37c34235ef3bf1d7bf3773aeac"}, "dialyxir": {:hex, :dialyxir, "1.4.5", "ca1571ac18e0f88d4ab245f0b60fa31ff1b12cbae2b11bd25d207f865e8ae78a", [:mix], [{:erlex, ">= 0.2.7", [hex: :erlex, repo: "hexpm", optional: false]}], "hexpm", "b0fb08bb8107c750db5c0b324fa2df5ceaa0f9307690ee3c1f6ba5b9eb5d35c3"}, "dotenvy": {:hex, :dotenvy, "0.9.0", "aad823209cd7c13babe2dc310d9e54ce0203674cbd7631b0ced2a771e3a49532", [:mix], [], "hexpm", "ab959208a9ad02ff26ce1c5d4911668925c12a6cf58287ef77ae63161909c73b"}, - "earmark_parser": {:hex, :earmark_parser, "1.4.43", "34b2f401fe473080e39ff2b90feb8ddfeef7639f8ee0bbf71bb41911831d77c5", [:mix], [], "hexpm", "970a3cd19503f5e8e527a190662be2cee5d98eed1ff72ed9b3d1a3d466692de8"}, + "earmark_parser": {:hex, :earmark_parser, "1.4.44", "f20830dd6b5c77afe2b063777ddbbff09f9759396500cdbe7523efd58d7a339c", [:mix], [], "hexpm", "4778ac752b4701a5599215f7030989c989ffdc4f6df457c5f36938cc2d2a2750"}, "ecto": {:hex, :ecto, "3.12.5", "4a312960ce612e17337e7cefcf9be45b95a3be6b36b6f94dfb3d8c361d631866", [:mix], [{:decimal, "~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "6eb18e80bef8bb57e17f5a7f068a1719fbda384d40fc37acb8eb8aeca493b6ea"}, "ecto_sql": {:hex, :ecto_sql, "3.12.1", "c0d0d60e85d9ff4631f12bafa454bc392ce8b9ec83531a412c12a0d415a3a4d0", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.12", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.7", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.19 or ~> 1.0", [hex: :postgrex, repo: 
"hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "aff5b958a899762c5f09028c847569f7dfb9cc9d63bdb8133bff8a5546de6bf5"}, "electric": {:hex, :electric, "1.0.1", "23b4e35af67316c2b44d145a545068bc63f8aac8de669d0c9c06c6349a232ae7", [:mix], [{:backoff, "~> 1.1", [hex: :backoff, repo: "hexpm", optional: false]}, {:bandit, "~> 1.5", [hex: :bandit, repo: "hexpm", optional: false]}, {:dotenvy, "~> 0.8", [hex: :dotenvy, repo: "hexpm", optional: false]}, {:ecto, "~> 3.11", [hex: :ecto, repo: "hexpm", optional: false]}, {:electric_cubdb, "~> 2.0", [hex: :electric_cubdb, repo: "hexpm", optional: false]}, {:gen_stage, "~> 1.2", [hex: :gen_stage, repo: "hexpm", optional: false]}, {:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}, {:nimble_options, "~> 1.1", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:opentelemetry, "~> 1.5", [hex: :opentelemetry, repo: "hexpm", optional: true]}, {:opentelemetry_exporter, "~> 1.8", [hex: :opentelemetry_exporter, repo: "hexpm", optional: true]}, {:opentelemetry_semantic_conventions, "~> 1.27", [hex: :opentelemetry_semantic_conventions, repo: "hexpm", optional: false]}, {:opentelemetry_telemetry, "~> 1.1", [hex: :opentelemetry_telemetry, repo: "hexpm", optional: false]}, {:otel_metric_exporter, "~> 0.3", [hex: :otel_metric_exporter, repo: "hexpm", optional: true]}, {:pg_query_ex, "0.6.0", [hex: :pg_query_ex, repo: "hexpm", optional: false]}, {:plug, "~> 1.16", [hex: :plug, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.19", [hex: :postgrex, repo: "hexpm", optional: false]}, {:remote_ip, "~> 1.2", [hex: :remote_ip, repo: "hexpm", optional: false]}, {:req, "~> 0.5", [hex: :req, repo: "hexpm", optional: false]}, {:retry, "~> 0.18", [hex: :retry, repo: "hexpm", optional: false]}, {:sentry, "~> 10.0", [hex: :sentry, repo: "hexpm", optional: true]}, {:telemetry_metrics_prometheus_core, "~> 1.1", [hex: :telemetry_metrics_prometheus_core, repo: "hexpm", optional: true]}, {:telemetry_metrics_statsd, "~> 0.7", [hex: :telemetry_metrics_statsd, repo: "hexpm", optional: true]}, {:telemetry_poller, "~> 1.1", [hex: :telemetry_poller, repo: "hexpm", optional: false]}, {:tls_certificate_check, "~> 1.23", [hex: :tls_certificate_check, repo: "hexpm", optional: false]}, {:tz, "~> 0.27", [hex: :tz, repo: "hexpm", optional: false]}], "hexpm", "1044225c00dd7a6776d9b6323870f44e61e9cc7689abe28094d9b237290d831a"}, @@ -18,7 +18,7 @@ "electric_cubdb": {:hex, :electric_cubdb, "2.0.2", "36f86e3c52dc26f4e077a49fbef813b1a38d3897421cece851f149190b34c16c", [:mix], [], "hexpm", "0c0e24b31fb76ad1b33c5de2ab35c41a4ff9da153f5c1f9b15e2de78575acaf2"}, "elixir_make": {:hex, :elixir_make, "0.9.0", "6484b3cd8c0cee58f09f05ecaf1a140a8c97670671a6a0e7ab4dc326c3109726", [:mix], [], "hexpm", "db23d4fd8b757462ad02f8aa73431a426fe6671c80b200d9710caf3d1dd0ffdb"}, "erlex": {:hex, :erlex, "0.2.7", "810e8725f96ab74d17aac676e748627a07bc87eb950d2b83acd29dc047a30595", [:mix], [], "hexpm", "3ed95f79d1a844c3f6bf0cea61e0d5612a42ce56da9c03f01df538685365efb0"}, - "ex_doc": {:hex, :ex_doc, "0.37.1", "65ca30d242082b95aa852b3b73c9d9914279fff56db5dc7b3859be5504417980", [:mix], [{:earmark_parser, "~> 1.4.42", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 
1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "6774f75477733ea88ce861476db031f9399c110640752ca2b400dbbb50491224"}, + "ex_doc": {:hex, :ex_doc, "0.37.3", "f7816881a443cd77872b7d6118e8a55f547f49903aef8747dbcb345a75b462f9", [:mix], [{:earmark_parser, "~> 1.4.42", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "e6aebca7156e7c29b5da4daa17f6361205b2ae5f26e5c7d8ca0d3f7e18972233"}, "finch": {:hex, :finch, "0.19.0", "c644641491ea854fc5c1bbaef36bfc764e3f08e7185e1f084e35e0672241b76d", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "fc5324ce209125d1e2fa0fcd2634601c52a787aff1cd33ee833664a5af4ea2b6"}, "floki": {:hex, :floki, "0.37.0", "b83e0280bbc6372f2a403b2848013650b16640cd2470aea6701f0632223d719e", [:mix], [], "hexpm", "516a0c15a69f78c47dc8e0b9b3724b29608aa6619379f91b1ffa47109b5d0dd3"}, "gen_stage": {:hex, :gen_stage, "1.2.1", "19d8b5e9a5996d813b8245338a28246307fd8b9c99d1237de199d21efc4c76a1", [:mix], [], "hexpm", "83e8be657fa05b992ffa6ac1e3af6d57aa50aace8f691fcf696ff02f8335b001"}, @@ -30,6 +30,7 @@ "makeup": {:hex, :makeup, "1.2.1", "e90ac1c65589ef354378def3ba19d401e739ee7ee06fb47f94c687016e3713d1", [:mix], [{:nimble_parsec, "~> 1.4", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "d36484867b0bae0fea568d10131197a4c2e47056a6fbe84922bf6ba71c8d17ce"}, "makeup_elixir": {:hex, :makeup_elixir, "1.0.1", "e928a4f984e795e41e3abd27bfc09f51db16ab8ba1aebdba2b3a575437efafc2", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "7284900d412a3e5cfd97fdaed4f5ed389b8f2b4cb49efc0eb3bd10e2febf9507"}, "makeup_erlang": {:hex, :makeup_erlang, "1.0.2", "03e1804074b3aa64d5fad7aa64601ed0fb395337b982d9bcf04029d68d51b6a7", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "af33ff7ef368d5893e4a267933e7744e46ce3cf1f61e2dccf53a111ed3aa3727"}, + "makeup_ts": {:hex, :makeup_ts, "0.2.2", "1e0189bf0c624ae2d00772ea49b5140b9b7c7435240ccafe103e127e43e81a4d", [:mix], [{:makeup, "~> 1.1", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "4ca472a975c56ce41ac1adb513691cdeaa71e1581822620e8acfec2cdc96154f"}, "mime": {:hex, :mime, "2.0.6", "8f18486773d9b15f95f4f4f1e39b710045fa1de891fada4516559967276e4dc2", [:mix], [], "hexpm", "c9945363a6b26d747389aac3643f8e0e09d30499a138ad64fe8fd1d13d9b153e"}, "mint": {:hex, :mint, "1.7.1", "113fdb2b2f3b59e47c7955971854641c61f378549d73e829e1768de90fc1abf1", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", 
"fceba0a4d0f24301ddee3024ae116df1c3f4bb7a563a731f45fdfeb9d39a231b"}, "mox": {:hex, :mox, "1.2.0", "a2cd96b4b80a3883e3100a221e8adc1b98e4c3a332a8fc434c39526babafd5b3", [:mix], [{:nimble_ownership, "~> 1.0", [hex: :nimble_ownership, repo: "hexpm", optional: false]}], "hexpm", "c7b92b3cc69ee24a7eeeaf944cd7be22013c52fcb580c1f33f50845ec821089a"}, diff --git a/scripts/docs.sh b/scripts/docs.sh new file mode 100755 index 0000000..4a6cac6 --- /dev/null +++ b/scripts/docs.sh @@ -0,0 +1 @@ +mix docs \ No newline at end of file diff --git a/scripts/watch_docs.sh b/scripts/watch_docs.sh new file mode 100755 index 0000000..b04ec12 --- /dev/null +++ b/scripts/watch_docs.sh @@ -0,0 +1,15 @@ +#!/usr/bin/env bash + +./scripts/docs.sh + +# MIX_DOCS_LIVE=true adds an automatic reload to the generated docs pages +if command -v fswatch >/dev/null 2>&1; then + fswatch -0 -e "deps" -e "doc" -e "_build" . | + xargs -0 -n 1 MIX_DOCS_LIVE=true ./scripts/docs.sh +elif command -v inotifywait >/dev/null 2>&1; then + inotifywait -m -e modify -e create -e delete -r lib/ README.md | + while read NEWFILE; do MIX_DOCS_LIVE=true ./scripts/docs.sh; done +else + echo "no filesystem watch available: install fswatch or inotifywait" + exit 1 +fi diff --git a/test/phoenix/sync/writer_test.exs b/test/phoenix/sync/writer_test.exs new file mode 100644 index 0000000..932139b --- /dev/null +++ b/test/phoenix/sync/writer_test.exs @@ -0,0 +1,1044 @@ +defmodule Phoenix.Sync.WriterTest do + use ExUnit.Case, async: true + + alias Phoenix.Sync.Writer + alias Ecto.Changeset + + alias Support.Repo + + import Ecto.Query + + def with_repo(_ctx) do + _pid = start_link_supervised!(Repo) + Ecto.Adapters.SQL.Sandbox.mode(Repo, :manual) + + :ok = Ecto.Adapters.SQL.Sandbox.checkout(Repo) + + [repo: Repo] + end + + defp changeset_id(changeset), do: Changeset.fetch_field!(changeset, :id) + + defp notify({pid, ref}, msg) when is_pid(pid) do + notify(pid, {ref, msg}) + end + + defp notify(pid, msg) when is_pid(pid) do + send(pid, msg) + end + + def todo_changeset(todo, :delete, _data, pid) do + notify(pid, {:todo, :changeset, :delete, todo.id}) + todo + end + + def todo_changeset(todo, data, action, pid) when action in [:insert, :update, :delete] do + todo + |> Changeset.cast(data, [:id, :title, :completed]) + |> tap(¬ify(pid, {:todo, :changeset, action, changeset_id(&1)})) + end + + def delete_changeset(todo, pid) do + notify(pid, {:todo, :delete, todo}) + todo + end + + def todo_insert_changeset(todo, data, pid) do + todo + |> Changeset.cast(data, [:id, :title, :completed]) + |> Changeset.validate_required([:id, :title, :completed]) + |> tap(¬ify(pid, {:todo, :insert, changeset_id(&1)})) + end + + def todo_update_changeset(todo, data, pid) do + notify(pid, {:todo, :update, todo.id, data}) + + todo + |> Changeset.cast(data, [:id, :title, :completed]) + |> Changeset.validate_required([:id, :title, :completed]) + end + + def todo_delete_changeset(todo, data, pid) do + notify(pid, {:todo, :delete, todo.id}) + + todo + |> Changeset.cast(data, [:id]) + |> Changeset.validate_required([:id]) + end + + def todo_pre_apply_insert(multi, changeset, _changes, pid) do + notify(pid, {:todo, :pre_apply_insert, changeset_id(changeset)}) + multi + end + + def todo_pre_apply_update(multi, changeset, _changes, pid) do + notify(pid, {:todo, :pre_apply_update, changeset_id(changeset)}) + + multi + end + + def todo_pre_apply_delete(multi, changeset, _changes, pid) do + notify(pid, {:todo, :pre_apply_delete, changeset_id(changeset)}) + multi + end + + def 
todo_post_apply_insert(multi, changeset, _changes, pid) do + notify(pid, {:todo, :post_apply_insert, changeset_id(changeset)}) + multi + end + + def todo_post_apply_update(multi, changeset, _changes, pid) do + notify(pid, {:todo, :post_apply_update, changeset_id(changeset)}) + multi + end + + def todo_post_apply_delete(multi, changeset, _changes, pid) do + notify(pid, {:todo, :post_apply_delete, changeset_id(changeset)}) + multi + end + + def todo_get(%{"id" => id} = _change, pid) do + notify(pid, {:todo, :get, String.to_integer(id)}) + Repo.get_by(Support.Todo, id: id) + end + + def todo_check(operation, pid) do + notify(pid, {:todo, :check, operation}) + :ok + end + + def writer do + Writer.new() + end + + def ingest(writer, changes, opts \\ [format: Writer.Format.TanstackOptimistic]) do + Writer.ingest(writer, changes, opts) + end + + def todo_get_tuple(data, pid) do + case todo_get(data, pid) do + nil -> + {:error, "custom error message"} + + %_{} = todo -> + {:ok, todo} + end + end + + defmodule TodoNoChangeset do + use Ecto.Schema + + schema "todos" do + field :title, :string + field :completed, :boolean + end + end + + # new creates without applying (so doesn't need a repo) + # apply creates and applies i.e. new() |> Repo.transaction() + describe "allow/2" do + test "accepts a schema and changeset fun", _ctx do + assert %Writer{} = + Writer.allow(writer(), TodoNoChangeset, validate: &todo_changeset(&1, &2, &3, nil)) + + assert %Writer{} = + Writer.allow(writer(), TodoNoChangeset, validate: &todo_changeset(&1, &2, &3, nil)) + end + + test "rejects non-schema module" do + assert_raise ArgumentError, fn -> + Writer.allow(writer(), __MODULE__) + end + end + + test "allows for complete configuration of behaviour", _ctx do + pid = self() + + assert %Writer{} = + Writer.allow( + writer(), + Support.Todo, + table: "todos_local", + # defaults to Repo.get!(Todo, ) + load: &todo_get(&1, pid), + accept: [:insert, :update, :delete], + check: &todo_check(&1, pid), + insert: [ + validate: &todo_insert_changeset(&1, &2, pid), + post_apply: &todo_post_apply_insert(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_insert(&1, &2, &3, pid) + ], + update: [ + validate: &todo_update_changeset(&1, &2, pid), + post_apply: &todo_post_apply_update(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_update(&1, &2, &3, pid) + ], + delete: [ + validate: &todo_delete_changeset(&1, &2, pid), + post_apply: &todo_post_apply_delete(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_delete(&1, &2, &3, pid) + ] + ) + end + end + + defp with_todos(%{repo: repo}) do + Repo.query!( + "create table todos (id int8 not null primary key, title text not null, completed boolean not null default false)", + [] + ) + + repo.insert_all(Support.Todo, [ + [id: 1, title: "First todo", completed: false], + [id: 2, title: "Second todo", completed: true] + ]) + + :ok + end + + describe "transaction/2" do + setup [:with_repo, :with_todos] + + setup do + pid = self() + + writer_config_todo = + [ + table: "todos_local", + load: &todo_get(&1, pid), + accept: [:insert, :update, :delete], + check: &todo_check(&1, pid), + insert: [ + validate: &todo_insert_changeset(&1, &2, pid), + post_apply: &todo_post_apply_insert(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_insert(&1, &2, &3, pid) + ], + update: [ + validate: &todo_update_changeset(&1, &2, pid), + post_apply: &todo_post_apply_update(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_update(&1, &2, &3, pid) + ], + delete: [ + validate: &todo_delete_changeset(&1, &2, pid), + post_apply: 
&todo_post_apply_delete(&1, &2, &3, pid), + pre_apply: &todo_pre_apply_delete(&1, &2, &3, pid) + ] + ] + + writer = Writer.allow(writer(), Support.Todo, writer_config_todo) + + [writer: writer, writer_config_todo: writer_config_todo] + end + + test "has sensible defaults for load and changeset functions" do + writer = writer() |> Writer.allow(Support.Todo) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + } + ] + + assert {:ok, _txid, _values} = writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "writes valid changes", ctx do + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "99", "title" => "Disposable todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "Changed title", "completed" => "false"}, + "changes" => %{"completed" => "true"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "99", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "98", "title" => "Working todo", "completed" => "false"}, + "changes" => %{"title" => "Working todo", "completed" => "true"} + } + ] + + writer = ingest(ctx.writer, changes) + + assert %Ecto.Multi{} = multi = Writer.to_multi(writer) + assert {:ok, txid, _values} = Writer.transaction(multi, Repo) + + assert is_integer(txid) + + # validate that the callbacks are being called + + # assert_receive {:todo, :get, 99} + # assert_receive {:todo, :get, 98} + # we don't call the load function for inserts + refute_receive {:todo, :get, 98}, 10 + refute_receive {:todo, :get, 99}, 10 + assert_receive {:todo, :get, 2} + assert_receive {:todo, :get, 1} + + assert_receive {:todo, :insert, 98} + assert_receive {:todo, :pre_apply_insert, 98} + assert_receive {:todo, :post_apply_insert, 98} + + assert_receive {:todo, :insert, 99} + assert_receive {:todo, :pre_apply_insert, 99} + assert_receive {:todo, :post_apply_insert, 99} + + assert_receive {:todo, :delete, 2} + assert_receive {:todo, :pre_apply_delete, 2} + assert_receive {:todo, :post_apply_delete, 2} + + assert_receive {:todo, :update, 1, %{"completed" => "true"}} + assert_receive {:todo, :pre_apply_update, 1} + assert_receive {:todo, :post_apply_update, 1} + + assert_receive {:todo, :update, 1, %{"title" => "Changed title"}} + assert_receive {:todo, :pre_apply_update, 1} + assert_receive {:todo, :post_apply_update, 1} + + assert_receive {:todo, :delete, 99} + assert_receive {:todo, :pre_apply_delete, 99} + assert_receive {:todo, :post_apply_delete, 99} + + assert_receive {:todo, :update, 98, %{"title" => "Working todo", "completed" => "true"}} + assert_receive {:todo, 
:pre_apply_update, 98} + assert_receive {:todo, :post_apply_update, 98} + + assert [ + %Support.Todo{id: 1, title: "Changed title", completed: true}, + %Support.Todo{id: 98, title: "Working todo", completed: true} + ] = ctx.repo.all(from(t in Support.Todo, order_by: t.id)) + end + + test "always validates that pk columns are included in all mutations", ctx do + changes = [ + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:error, _, _, _} = ctx.writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "allows for a generic changeset/3 for all mutations", _ctx do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, pid) + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, _changes} = writer |> ingest(changes) |> Writer.transaction(Repo) + + assert_receive {:todo, :changeset, :insert, 98} + assert_receive {:todo, :changeset, :delete, 2} + assert_receive {:todo, :changeset, :update, 1} + end + + test "returns an error if original record is not found" do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, pid) + ) + + changes = [ + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "111111", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:error, {:__phoenix_sync__, :changeset, 0}, %Writer.Error{}, _changes} = + writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "rejects updates not in :accept list", _ctx do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + accept: [:insert, :update], + validate: &todo_changeset(&1, &2, &3, pid) + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:error, :check, %Writer.Error{}, _changes} = + writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "rejects any txn that fails the check test" do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + check: fn + %{operation: :delete} -> {:error, "no deletes!"} + _op -> :ok + end, + validate: &todo_changeset(&1, &2, &3, pid) 
+ ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:error, :check, %Writer.Error{message: "no deletes!"}, _changes} = + writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "supports accepting writes on multiple tables", _ctx do + pid = self() + todo1_ref = make_ref() + todo2_ref = make_ref() + + writer = + writer() + |> Writer.allow(Support.Todo, + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, {pid, todo1_ref}), + insert: [post_apply: &todo_post_apply_insert(&1, &2, &3, {pid, todo1_ref})] + ) + |> Writer.allow(Support.Todo, + table: "todos_2", + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, {pid, todo2_ref}), + insert: [post_apply: &todo_post_apply_insert(&1, &2, &3, {pid, todo2_ref})] + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "modified" => %{"id" => "98", "title" => "New todo1", "completed" => "false"} + }, + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_2"]}, + "modified" => %{"id" => "99", "title" => "New todo2", "completed" => "false"} + } + ] + + assert {:ok, _txid, _changes} = + writer |> ingest(changes) |> Writer.transaction(Repo) + + assert_receive {^todo1_ref, {:todo, :changeset, :insert, 98}} + assert_receive {^todo1_ref, {:todo, :post_apply_insert, 98}} + + assert_receive {^todo2_ref, {:todo, :changeset, :insert, 99}} + assert_receive {^todo2_ref, {:todo, :post_apply_insert, 99}} + end + + test "is intelligent about mapping client tables to server", _ctx do + pid = self() + + writer = + writer() + |> Writer.allow(Support.Todo, + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, pid), + insert: [post_apply: &todo_post_apply_insert(&1, &2, &3, pid)] + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "modified" => %{"id" => "98", "title" => "New todo1", "completed" => "false"} + }, + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["client", "todos"]}, + "modified" => %{"id" => "99", "title" => "New todo2", "completed" => "false"} + } + ] + + assert {:ok, _txid, _changes} = writer |> ingest(changes) |> Writer.transaction(Repo) + + assert_receive {:todo, :changeset, :insert, 98} + assert_receive {:todo, :post_apply_insert, 98} + + assert_receive {:todo, :changeset, :insert, 99} + assert_receive {:todo, :post_apply_insert, 99} + end + + test "only matches full relation if configured", _ctx do + pid = self() + + writer = + writer() + |> Writer.allow(Support.Todo, + table: ["public", "todos"], + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, pid), + insert: [post_apply: &todo_post_apply_insert(&1, &2, &3, pid)] + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "modified" => %{"id" => "98", "title" => "New todo1", "completed" => "false"} + }, + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["client", "todos"]}, + "modified" => %{"id" => "99", "title" => "New 
todo2", "completed" => "false"} + } + ] + + # we have specified allow/2 with a fully qualified table so only one of the + # inserts matches + assert {:error, :check, %Writer.Error{}, _changes} = + writer |> ingest(changes) |> Writer.transaction(Repo) + end + + test "allows for 1-arity delete changeset functions", _ctx do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + delete: [ + validate: &delete_changeset(&1, pid) + ] + ) + + changes = [ + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + } + ] + + assert {:ok, _txid, _changes} = writer |> ingest(changes) |> Writer.transaction(Repo) + + assert_receive {:todo, :delete, %Support.Todo{id: 2}} + end + + test "allows for a generic pre_apply/3 and post_apply/3 for all mutations" do + pid = self() + + writer = + Writer.allow(writer(), Support.Todo, + table: "todos_local", + load: &todo_get(&1, pid), + pre_apply: fn multi, changeset, ctx -> + send(pid, {:pre_apply, ctx.operation.operation, changeset_id(changeset)}) + multi + end, + post_apply: fn multi, changeset, ctx -> + send(pid, {:post_apply, ctx.operation.operation, changeset_id(changeset)}) + multi + end + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, _changes} = writer |> ingest(changes) |> Writer.transaction(Repo) + + assert_receive {:pre_apply, :insert, 98} + assert_receive {:pre_apply, :delete, 2} + assert_receive {:pre_apply, :update, 1} + assert_receive {:post_apply, :insert, 98} + assert_receive {:post_apply, :delete, 2} + assert_receive {:post_apply, :update, 1} + end + + test "supports custom mutation message format", ctx do + pid = self() + + changes = [ + %{ + "perform" => "insert", + "relation" => ["public", "todos"], + "updates" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "perform" => "delete", + "relation" => ["public", "todos"], + "value" => %{"id" => "2"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "updates" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, _changes} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get(&1, pid)) + |> Writer.ingest(changes, parser: &parse_transaction/1) + |> Writer.transaction(Repo) + + assert [ + %Support.Todo{id: 1, title: "Changed title", completed: false}, + %Support.Todo{id: 98, title: "New todo", completed: false} + ] = ctx.repo.all(from(t in Support.Todo, order_by: t.id)) + end + + test "supports custom mutation message format via mfa", ctx do + pid = self() + + changes = [ + %{ + "perform" => "insert", + "relation" => ["public", "todos"], + "updates" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "perform" => "delete", + "relation" => ["public", "todos"], + "value" => %{"id" => "2"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "1", 
"title" => "First todo", "completed" => "false"}, + "updates" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, _changes} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get(&1, pid)) + |> Writer.ingest(changes, parser: {__MODULE__, :parse_transaction, []}) + |> Writer.transaction(Repo) + + assert [ + %Support.Todo{id: 1, title: "Changed title", completed: false}, + %Support.Todo{id: 98, title: "New todo", completed: false} + ] = ctx.repo.all(from(t in Support.Todo, order_by: t.id)) + end + + test "uses data in the txn if it exists" do + # if an update applies to a previously inserted value + # then rather than use the load fun and retrieve the value + # we can re-use the value in the multi change data + pid = self() + + changes = [ + %{ + "perform" => "insert", + "relation" => ["public", "todos"], + "updates" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "98"}, + "updates" => %{"title" => "Changed title"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "98"}, + "updates" => %{"title" => "Changed again", "completed" => true} + } + ] + + assert {:ok, _txid, _changes} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get(&1, pid)) + |> Writer.ingest(changes, parser: {__MODULE__, :parse_transaction, []}) + |> Writer.transaction(Repo) + + refute_receive {:todo, :get, 98}, 50 + + ## if we delete in the txn then we know it doesn't exist + + changes = [ + %{ + "perform" => "insert", + "relation" => ["public", "todos"], + "updates" => %{"id" => "99", "title" => "New todo", "completed" => "false"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "99"}, + "updates" => %{"title" => "Changed title"} + }, + %{ + "perform" => "delete", + "relation" => ["public", "todos"], + "value" => %{"id" => "99"} + }, + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "99"}, + "updates" => %{"title" => "Changed title", "completed" => true} + } + ] + + assert {:error, _txid, _, _changes} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get(&1, pid)) + |> Writer.ingest(changes, parser: {__MODULE__, :parse_transaction, []}) + |> Writer.transaction(Repo) + end + + test "allows for custom errors from load fun", _ctx do + pid = self() + + changes1 = [ + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "updates" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, _changes} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get_tuple(&1, pid)) + |> Writer.ingest(changes1, parser: {__MODULE__, :parse_transaction, []}) + |> Writer.transaction(Repo) + + changes2 = [ + %{ + "perform" => "update", + "relation" => ["public", "todos"], + "value" => %{"id" => "1001", "title" => "First todo", "completed" => "false"}, + "updates" => %{"title" => "Changed title"} + } + ] + + assert {:error, _, %Writer.Error{message: "custom error message"}, _} = + Writer.new() + |> Writer.allow(Support.Todo, load: &todo_get_tuple(&1, pid)) + |> Writer.ingest(changes2, parser: {__MODULE__, :parse_transaction, []}) + |> Writer.transaction(Repo) + end + + test "before_all", _ctx do + pid = self() + ref = make_ref() + counter = :atomics.new(1, signed: false) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", 
"todos"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + } + ] + + assert {:ok, _txid, %{before_all_todo: ^ref}} = + writer() + |> Writer.allow(Support.Todo, + load: &todo_get(&1, pid), + before_all: fn multi -> + :atomics.add(counter, 1, 1) + Ecto.Multi.put(multi, :before_all_todo, ref) + end + ) + |> Writer.apply(changes, Repo, format: Writer.Format.TanstackOptimistic) + + assert 1 == :atomics.get(counter, 1) + end + end + + describe "txid/1" do + setup [:with_repo, :with_todos] + + test "returns the txid", _ctx do + pid = self() + + writer = + writer() + |> Writer.allow(Support.Todo, + load: &todo_get(&1, pid), + validate: &todo_changeset(&1, &2, &3, pid), + insert: [post_apply: &todo_post_apply_insert(&1, &2, &3, pid)] + ) + + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos"]}, + "modified" => %{"id" => "98", "title" => "New todo1", "completed" => "false"} + } + ] + + assert {:ok, changes} = writer |> ingest(changes) |> Writer.to_multi() |> Repo.transaction() + + assert {:ok, txid} = Writer.txid(changes) + + assert is_integer(txid) + + assert txid == Writer.txid!(changes) + end + end + + describe "transact/3" do + setup [:with_repo, :with_todos] + + test "supports any unparsed mutation data" do + changes = [ + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "98", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "insert", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "modified" => %{"id" => "99", "title" => "Disposable todo", "completed" => "false"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "2"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "First todo", "completed" => "false"}, + "changes" => %{"title" => "Changed title"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "1", "title" => "Changed title", "completed" => "false"}, + "changes" => %{"completed" => "true"} + }, + %{ + "type" => "delete", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "99", "title" => "New todo", "completed" => "false"} + }, + %{ + "type" => "update", + "syncMetadata" => %{"relation" => ["public", "todos_local"]}, + "original" => %{"id" => "98", "title" => "Working todo", "completed" => "false"}, + "changes" => %{"title" => "Working todo", "completed" => "true"} + } + ] + + parent = self() + + assert {:ok, txid} = + Writer.transact( + changes, + Repo, + fn + %Writer.Operation{ + operation: op, + relation: relation, + data: data, + changes: changes + } -> + send(parent, {op, relation, data, changes}) + :ok + end, + format: Writer.Format.TanstackOptimistic + ) + + assert is_integer(txid) + assert_receive {:insert, ["public", "todos_local"], %{}, %{"id" => "98"}} + assert_receive {:insert, ["public", "todos_local"], %{}, %{"id" => "99"}} + assert_receive {:delete, ["public", "todos_local"], %{"id" => "2"}, %{}} + 
+      assert_receive {:update, ["public", "todos_local"], %{"id" => "1"}, %{"title" => _}}
+      assert_receive {:update, ["public", "todos_local"], %{"id" => "1"}, %{"completed" => _}}
+      assert_receive {:delete, ["public", "todos_local"], %{"id" => "99"}, %{}}
+
+      assert_receive {:update, ["public", "todos_local"], %{"id" => "98"},
+                      %{"title" => _, "completed" => _}}
+    end
+
+    test "parse errors return an error tuple" do
+      assert {:error, "no"} =
+               Writer.transact([], Repo, fn _ -> :ok end, parser: fn _ -> {:error, "no"} end)
+    end
+
+    test "changes are applied" do
+      changes = [
+        {:insert, %Support.Todo{id: 99, title: "New todo 99"}},
+        {:insert, %Support.Todo{id: 100, title: "New todo 100"}}
+      ]
+
+      assert {:ok, txid} =
+               Writer.transact(
+                 changes,
+                 Repo,
+                 fn
+                   {:insert, todo} ->
+                     Repo.insert(todo)
+                 end,
+                 parser: fn ops -> {:ok, %Writer.Transaction{operations: ops}} end
+               )
+
+      assert [
+               %Support.Todo{id: 1, title: _, completed: false},
+               %Support.Todo{id: 2, title: _, completed: true},
+               %Support.Todo{id: 99, title: _, completed: false},
+               %Support.Todo{id: 100, title: _, completed: false}
+             ] = Repo.all(from(t in Support.Todo, order_by: t.id))
+
+      assert is_integer(txid)
+    end
+
+    test ":error tuples roll back the transaction" do
+      changes = [
+        {:insert, %Support.Todo{id: 99, title: "New todo"}},
+        {:error, "reject"}
+      ]
+
+      assert {:error, "reject"} =
+               Writer.transact(
+                 changes,
+                 Repo,
+                 fn
+                   {:insert, todo} ->
+                     Repo.insert(todo)
+
+                   {:error, _} = error ->
+                     error
+                 end,
+                 parser: fn ops -> {:ok, %Writer.Transaction{operations: ops}} end
+               )
+
+      assert [
+               %Support.Todo{id: 1, title: _, completed: false},
+               %Support.Todo{id: 2, title: _, completed: true}
+             ] = Repo.all(from(t in Support.Todo, order_by: t.id))
+    end
+  end
+
+  def parse_transaction(m) when is_list(m) do
+    with {:ok, operations} <-
+           Writer.Transaction.parse_operations(m, fn op ->
+             Writer.Operation.new(op["perform"], op["relation"], op["value"], op["updates"])
+           end) do
+      Writer.Transaction.new(operations)
+    end
+  end
+end
diff --git a/test/support/todo.ex b/test/support/todo.ex
index 041a172..e71fc67 100644
--- a/test/support/todo.ex
+++ b/test/support/todo.ex
@@ -1,8 +1,16 @@
 defmodule Support.Todo do
   use Ecto.Schema
 
+  import Ecto.Changeset
+
   schema "todos" do
     field :title, :string
     field :completed, :boolean
   end
+
+  def changeset(todo, data) do
+    todo
+    |> cast(data, [:id, :title, :completed])
+    |> validate_required([:id, :title])
+  end
 end