Commit ba2a9fd: Migrations structure

1 parent: 4b30f69

10 files changed (+658, -583 lines)

doc/platform/ddl_dml/centralized_migrations_tt.rst

Lines changed: 0 additions & 531 deletions
This file was deleted.

doc/platform/ddl_dml/index.rst

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ This section contains guides on performing data operations in Tarantool.
    value_store
    schema_desc
    operations
-   migrations
+   migrations/index
    read_views
    sql/index
Lines changed: 299 additions & 0 deletions
@@ -0,0 +1,299 @@
.. _basic_migrations_tt:

Basic tt migrations tutorial
============================

**Example on GitHub:** `migrations <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/migrations>`_

In this tutorial, you learn to define the cluster data schema using the centralized
migration management mechanism implemented in the Enterprise Edition of the :ref:`tt <tt-cli>` utility.

.. _basic_migrations_tt_prereq:

Prerequisites
-------------

Before starting this tutorial:

- Download and :ref:`install Tarantool Enterprise SDK <enterprise-install>`.
- Install `etcd <https://etcd.io/>`__.

.. _basic_migrations_tt_cluster:

Preparing a cluster
-------------------

The centralized migration mechanism works with Tarantool EE clusters that:

- use etcd as a centralized configuration storage
- use the `CRUD <https://github.com/tarantool/crud>`__ module for data sharding

.. _basic_migrations_tt_cluster_etcd:

Setting up etcd
~~~~~~~~~~~~~~~
First, start up an etcd instance to use as a configuration storage:

.. code-block:: console

   $ etcd

etcd runs on the default port 2379.

Optionally, enable etcd authentication by executing the following script:

.. code-block:: bash

   #!/usr/bin/env bash

   etcdctl user add root:topsecret
   etcdctl role add app_config_manager
   etcdctl role grant-permission app_config_manager --prefix=true readwrite /myapp/
   etcdctl user add app_user:config_pass
   etcdctl user grant-role app_user app_config_manager
   etcdctl auth enable

This script creates an etcd user ``app_user`` with read and write permissions to the ``/myapp``
prefix, in which the cluster configuration will be stored. The user's password is ``config_pass``.
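To check that the new credentials work, you can optionally read the prefix back with ``etcdctl``. The prefix is still empty at this point, so a successful call returns without output:

.. code-block:: console

   $ etcdctl --user app_user:config_pass get --prefix /myapp/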
.. note::

   If you don't enable etcd authentication, make ``tt migrations`` calls without
   the configuration storage credentials.
.. _basic_migrations_tt_cluster_create:

Creating a cluster
~~~~~~~~~~~~~~~~~~

#. Initialize a ``tt`` environment:

   .. code-block:: console

      $ tt init

#. In the ``instances.enabled`` directory, create the ``myapp`` directory.
#. Go to the ``instances.enabled/myapp`` directory and create application files:

   - ``instances.yml``:

     .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/instances.yml
        :language: yaml
        :dedent:

   - ``config.yaml``:

     .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/config.yaml
        :language: yaml
        :dedent:

   - ``myapp-scm-1.rockspec``:

     .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/myapp-scm-1.rockspec
        :dedent:

#. Create the ``source.yaml`` file with a cluster configuration to publish to etcd:

   .. note::

      This configuration describes a typical CRUD-enabled sharded cluster with
      one router and two storage replica sets, each including one master and one read-only replica.

   .. literalinclude:: /code_snippets/snippets/migrations/instances.enabled/myapp/source.yaml
      :language: yaml
      :dedent:

#. Publish the configuration to etcd:

   .. code-block:: console

      $ tt cluster publish "http://app_user:config_pass@localhost:2379/myapp/" source.yaml

The full cluster code is available on GitHub: `migrations <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/migrations/instances.enabled/myapp>`_.
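To verify the publication, you can read the stored configuration back (assuming the ``tt cluster show`` subcommand is available in your ``tt`` version):

.. code-block:: console

   $ tt cluster show "http://app_user:config_pass@localhost:2379/myapp/"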
.. _basic_migrations_tt_cluster_start:

Building and starting the cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Build the application:

   .. code-block:: console

      $ tt build myapp

#. Start the cluster:

   .. code-block:: console

      $ tt start myapp

   To check that the cluster is up and running, use ``tt status``:

   .. code-block:: console

      $ tt status myapp

#. Bootstrap vshard in the cluster:

   .. code-block:: console

      $ tt replicaset vshard bootstrap myapp
.. _basic_migrations_tt_write:

Writing migrations
------------------

To perform migrations in the cluster, write them in Lua and publish them to the cluster's
etcd configuration storage.

Each migration file must return a Lua table with one object named ``apply``.
This object has one field -- ``scenario`` -- that stores the migration function:

.. code-block:: lua

   local function apply_scenario()
       -- migration code
   end

   return {
       apply = {
           scenario = apply_scenario,
       },
   }

The migration unit is a single file: its ``scenario`` is executed as a whole. An error
on any step of the ``scenario`` causes the entire migration to fail.

Migrations are executed in lexicographical order. Thus, it's convenient to
use filenames that start with ordered numbers to define the migration order, for example:

.. code-block:: text

   000001_create_space.lua
   000002_create_index.lua
   000003_alter_space.lua

The default location where ``tt`` searches for migration files is ``/migrations/scenario``.
Create this subdirectory inside the ``tt`` environment. Then, create two migration files:
- ``000001_create_writers_space.lua``: creates a space, defines its format, and
  creates a primary index.

  .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000001_create_writers_space.lua
     :language: lua
     :dedent:

  .. note::

     Note the usage of the ``tt-migrations.helpers`` module.
     In this example, its ``register_sharding_key`` function is used
     to define a sharding key for the space.

- ``000002_create_writers_index.lua``: adds one more index.

  .. literalinclude:: /code_snippets/snippets/migrations/migrations/scenario/000002_create_writers_index.lua
     :language: lua
     :dedent:
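The included files ship with the example repository. For reference, a hypothetical ``000001_create_writers_space.lua`` could look like the following sketch. The field layout matches the ``writers`` schema shown at the end of this tutorial; the exact ``register_sharding_key`` signature is an assumption:

.. code-block:: lua

   -- Hypothetical sketch of 000001_create_writers_space.lua; see the example
   -- repository for the actual file. The register_sharding_key call is an
   -- assumed signature, not a documented one.
   local helpers = require('tt-migrations.helpers')

   local function apply_scenario()
       -- Create the space and define its format.
       local space = box.schema.space.create('writers')
       space:format({
           {name = 'id', type = 'number'},
           {name = 'bucket_id', type = 'number', is_nullable = true},
           {name = 'name', type = 'string'},
           {name = 'age', type = 'number'},
       })

       -- Create the primary index.
       space:create_index('primary', {parts = {'id'}})

       -- Register the sharding key for CRUD.
       helpers.register_sharding_key('writers', {'id'})
   end

   return {
       apply = {
           scenario = apply_scenario,
       },
   }

The second file, ``000002_create_writers_index.lua``, would similarly call ``space:create_index()`` for the ``age`` field inside its own ``apply`` scenario.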
.. _basic_migrations_tt_publish:

Publishing migrations
---------------------

To publish migrations to the etcd configuration storage, run ``tt migrations publish``:

.. code-block:: console

   $ tt migrations publish http://app_user:config_pass@localhost:2379/myapp
      • 000001_create_writers_space.lua: successfully published to key "000001_create_writers_space.lua"
      • 000002_create_writers_index.lua: successfully published to key "000002_create_writers_index.lua"
.. _basic_migrations_tt_apply:

Applying migrations
-------------------

To apply published migrations to the cluster, run ``tt migrations apply`` providing
a cluster user's credentials:

.. code-block:: console

   $ tt migrations apply http://app_user:config_pass@localhost:2379/myapp \
       --tarantool-username=client --tarantool-password=secret

.. important::

   The cluster user must have enough access privileges to execute the migration code.

The output should look as follows:

.. code-block:: console

   • router-001:
     • 000001_create_writers_space.lua: successfully applied
     • 000002_create_writers_index.lua: successfully applied
   • storage-001:
     • 000001_create_writers_space.lua: successfully applied
     • 000002_create_writers_index.lua: successfully applied
   • storage-002:
     • 000001_create_writers_space.lua: successfully applied
     • 000002_create_writers_index.lua: successfully applied

The migrations are applied on all replica set leaders. Read-only replicas
receive the changes from the corresponding replica set leaders.
Check the migration status with ``tt migrations status``:

.. code-block:: console

   $ tt migrations status http://app_user:config_pass@localhost:2379/myapp --tarantool-username=client --tarantool-password=secret
   • migrations centralized storage scenarios:
     • 000001_create_writers_space.lua
     • 000002_create_writers_index.lua
   • migrations apply status on Tarantool cluster:
     • router-001:
       • 000001_create_writers_space.lua: APPLIED
       • 000002_create_writers_index.lua: APPLIED
     • storage-001:
       • 000001_create_writers_space.lua: APPLIED
       • 000002_create_writers_index.lua: APPLIED
     • storage-002:
       • 000001_create_writers_space.lua: APPLIED
       • 000002_create_writers_index.lua: APPLIED
To make sure that the space and indexes are created in the cluster, connect to the router
instance and retrieve the space information:

.. code-block:: console

   $ tt connect myapp:router-001

.. code-block:: tarantoolsession

   myapp:router-001-a> require('crud').schema('writers')
   ---
   - indexes:
       0:
         unique: true
         parts:
         - fieldno: 1
           type: number
           exclude_null: false
           is_nullable: false
         id: 0
         type: TREE
         name: primary
       2:
         unique: true
         parts:
         - fieldno: 4
           type: number
           exclude_null: false
           is_nullable: false
         id: 2
         type: TREE
         name: age
     format: [{'name': 'id', 'type': 'number'}, {'type': 'number', 'name': 'bucket_id',
       'is_nullable': true}, {'name': 'name', 'type': 'string'}, {'name': 'age', 'type': 'number'}]
   ...
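As an optional smoke test, you can write and read a tuple through CRUD on the same router connection (the tuple values here are made up for illustration):

.. code-block:: tarantoolsession

   myapp:router-001-a> require('crud').insert('writers', {1, nil, 'Leo Tolstoy', 82})
   myapp:router-001-a> require('crud').get('writers', 1)

``crud.insert`` computes the ``bucket_id`` field automatically when it is passed as ``nil``, so the tuple lands on the correct storage replica set.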
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
.. _centralized_migrations_tt:

Centralized migrations with tt
==============================

**Example on GitHub:** `migrations <https://github.com/tarantool/doc/tree/latest/doc/code_snippets/snippets/migrations>`_

In this section, you learn to use the centralized migration management mechanism
implemented in the Enterprise Edition of the :ref:`tt <tt-cli>` utility.

See also:

- :ref:`tt migrations reference <tt-migrations>` for the full list of command-line options.
- :ref:`tcm_cluster_migrations` to learn about managing migrations from |tcm_full_name|.

.. toctree::
   :maxdepth: 1

   basic_migrations_tt
   upgrade_migrations_tt
   extend_migrations_tt
   troubleshoot_migrations_tt
