@@ -344,22 +344,12 @@ staging.ottg.co.uk : ok=3 changed=2 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
----

+ I don't know about you, but whenever I make a terminal spew out a stream
+ of output, I like to make little _brrp brrp brrp_ noises, a bit like the
+ computer Mother, in _Alien_.
+ Ansible scripts are particularly satisfying in this regard.

- ////
- old error message when trying to use elspeth user to run docker.
- this goes wrong because groups don't work immediately:
-
- TASK [Run test container] *****************************************************
- fatal: [192.168.56.10]: FAILED! => {"changed": false, "msg": "Error connecting:
- Error while fetching server API version: ('Connection aborted.',
- PermissionError(13, 'Permission denied'))"}
-
- waiting a few minutes fixes it
-
- for now i'll just put become:true
- ////
-

=== SSHing Into the Server and Viewing Container Logs
@@ -701,11 +691,117 @@ but I wanted to keep this (already long) chapter as simple as possible.
*******************************************************************************

+
+ Let's run the latest version of our playbook and see how our tests get on:
+
+ [subs="specialcharacters,quotes"]
+ ----
+ $ *ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/ansible-provision.yaml -v*
+ [...]
+ PLAYBOOK: ansible-provision.yaml **********************************************
+ 1 plays in infra/ansible-provision.yaml
+
+ PLAY [all] ********************************************************************
+
+ TASK [Gathering Facts] ********************************************************
+ ok: [staging.ottg.co.uk]
+
+ TASK [Install docker] *********************************************************
+ ok: [staging.ottg.co.uk] => {"cache_update_time": 1709136057, "cache_updated":
+ false, "changed": false}
+
+ TASK [Build container image locally] ******************************************
+ changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Built image [...]
+
+ TASK [Export container image locally] *****************************************
+ changed: [staging.ottg.co.uk -> 127.0.0.1] => {"actions": ["Archived image [...]
+
+ TASK [Upload image to server] *************************************************
+ changed: [staging.ottg.co.uk] => {"changed": true, [...]
+
+ TASK [Import container image on server] ***************************************
+ changed: [staging.ottg.co.uk] => {"actions": ["Loaded image [...]
+
+ TASK [Ensure .env file exists] ************************************************
+ changed: [staging.ottg.co.uk] => {"changed": true, [...]
+
+ TASK [Run container] **********************************************************
+ changed: [staging.ottg.co.uk] => {"changed": true, "container": [...]
+
+ PLAY RECAP ********************************************************************
+ staging.ottg.co.uk : ok=8 changed=6 unreachable=0 failed=0
+ skipped=0 rescued=0 ignored=0
+ ----
+
+ Looks good! What do our tests think?
+
==== More Debugging

- forgot ports!
+ We run our tests as usual and run into a new problem:
+
+ [subs="specialcharacters,macros"]
+ ----
+ $ pass:quotes[*TEST_SERVER=staging.ottg.co.uk python manage.py test functional_tests*]
+ [...]
+ selenium.common.exceptions.WebDriverException: Message: Reached error page:
+ about:neterror?e=connectionFailure&u=http%3A//staging.ottg.co.uk/[...]
+ ----
+
+ That `neterror` makes me think it's another networking problem.
+ Let's try `curl` locally:
+
+ [subs="specialcharacters,macros"]
+ ----
+ $ pass:quotes[*curl -iv staging.ottg.co.uk*]
+ [...]
+ curl: (7) Failed to connect to staging.ottg.co.uk port 80 after 25 ms: Couldn't
+ connect to server
+ ----
+
+ Now let's SSH in and take a look from the server itself, starting with the container logs:
+
+ [subs="specialcharacters,quotes"]
+ ----
+ elspeth@server$ *docker logs superlists*
+ [2024-02-28 22:14:43 +0000] [7] [INFO] Starting gunicorn 21.2.0
+ [2024-02-28 22:14:43 +0000] [7] [INFO] Listening at: http://0.0.0.0:8888 (7)
+ [2024-02-28 22:14:43 +0000] [7] [INFO] Using worker: sync
+ [2024-02-28 22:14:43 +0000] [8] [INFO] Booting worker with pid: 8
+ ----
+
+ No errors in the logs. Now let's try `curl` from the server itself:
+
+ [subs="specialcharacters,quotes"]
+ ----
+ elspeth@server$ *curl -iv localhost*
+ * Trying 127.0.0.1:80...
+ * connect to 127.0.0.1 port 80 failed: Connection refused
+ * Trying ::1:80...
+ * connect to ::1 port 80 failed: Connection refused
+ * Failed to connect to localhost port 80 after 0 ms: Connection refused
+ * Closing connection 0
+ curl: (7) Failed to connect to localhost port 80 after 0 ms: Connection refused
+ ----

- show ssh, curl localhosts maybe.
+ Hmm, `curl` fails on the server too.
+ But all this talk of `port 80`, both locally and on the server, might be giving us a clue.
+ Let's check `docker ps`:
+
+ [subs="specialcharacters,quotes"]
+ ----
+ elspeth@server$ *docker ps*
+ CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS
+ PORTS     NAMES
+ 1dd87cbfa874   superlists   "/bin/sh -c 'gunicor…"   9 minutes ago   Up 9
+ minutes             superlists
+ ----
+
+ This might be ringing a bell now--we forgot the ports.
+
+ We want to expose port 8888 inside the container as port 80 (the default web/HTTP port)
+ on the server:

[role="sourcecode"]
.infra/ansible-provision.yaml (ch11l005)
@@ -723,45 +819,119 @@ show ssh, curl localhosts maybe.
----
====
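In `community.docker.docker_container`, port publishing works like `docker run -p`: the `ports` option takes `"host:container"` pairs, host port first. As a hedged sketch (the full listing is in the chapter's repo), the change amounts to adding a `ports` entry to the existing `Run container` task:

```yaml
# Sketch only: publish container port 8888 on host port 80 (host:container).
# Option name from the community.docker.docker_container module.
- name: Run container
  community.docker.docker_container:
    name: superlists
    image: superlists
    state: started
    ports:
      - "80:8888"
```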
+ That gets us to a new failure:

- ////
- ==== Making Sure Our Container Starts on Boot
-
- ((("Container", "automatic booting/reloading of")))
- Our final step is to make sure
- that the server starts up our container automatically on boot,
- and reloads it automatically if it crashes.
-
- (used to need systemd, now you can just set restart_policy.
- ////
+ ----
+ selenium.common.exceptions.NoSuchElementException: Message: Unable to locate
+ element: [id="id_list_table"]; [...]
+ ----

=== Mounting the Database on the Server and Running Migrations

- todo show test output and/or error page
+ Taking a look at the logs from the server,
+ we can see that the database is not initialised.

[subs="specialcharacters,quotes"]
----
- *ssh elspeth@staging.ottg.co.uk docker logs superlists*
+ $ *ssh elspeth@server docker logs superlists*
[...]
django.db.utils.OperationalError: no such table: lists_list
----

- todo add db.sqlite mount and migrate
+
+ Once we add tasks to mount a database file and run migrations
+ (the listing follows below), the playbook run looks like this:
+
+ [subs="specialcharacters,quotes"]
+ ----
+ $ *ansible-playbook --user=elspeth -i staging.ottg.co.uk, infra/ansible-provision.yaml -v*
+ [...]
+ TASK [Run migration inside container] *****************************************
+ changed: [staging.ottg.co.uk] => {"changed": true, "rc": 0, "stderr": "",
+ "stderr_lines": [], "stdout": "Operations to perform:\n Apply all migrations:
+ auth, contenttypes, lists, sessions\nRunning migrations:\n Applying
+ contenttypes.0001_initial... OK\n Applying
+ contenttypes.0002_remove_content_type_name... OK\n Applying
+ auth.0001_initial... OK\n Applying
+ auth.0002_alter_permission_name_max_length... OK\n Applying
+ [...]
+ PLAY RECAP ********************************************************************
+ staging.ottg.co.uk : ok=9 changed=2 unreachable=0 failed=0
+ skipped=0 rescued=0 ignored=0
+ ----
+
+
+ Here's how we do it:
+
+ [role="sourcecode"]
+ .infra/ansible-provision.yaml (ch11l006)
+ ====
+ [source,yaml]
+ ----
+ - name: Ensure db.sqlite3 file exists outside container
+   ansible.builtin.file:
+     path: /home/elspeth/db.sqlite3
+     state: touch  # <1>
+
+ - name: Run container
+   community.docker.docker_container:
+     name: superlists
+     image: superlists
+     state: started
+     recreate: true
+     env_file: ~/superlists.env
+     mounts:  # <2>
+       - type: bind
+         source: /home/elspeth/db.sqlite3
+         target: /src/db.sqlite3
+     ports:
+       - "80:8888"
+
+ - name: Run migration inside container
+   community.docker.docker_container_exec:  # <3>
+     container: superlists
+     command: ./manage.py migrate
+ ----
+ ====
+
+ <1> We use `file` with `state=touch` to make sure a placeholder file exists
+     before we try to mount it in.
+
+ <2> Here is the `mounts` config, which works a lot like the `--mount` flag to `docker run`.
+     Note that we use absolute paths for the bind mount;
+     Docker will complain if you use `~`.
+
+ <3> And we use the API for `docker exec` to run the migration command inside
+     the container.
+


=== It workssss

- hooray
+ Hooray!

[role="small-code"]
[subs="specialcharacters,macros"]
----
$ pass:quotes[*TEST_SERVER=staging.ottg.co.uk python manage.py test functional_tests*]
+ Found 3 test(s).
[...]
+
+ ...
+ ---------------------------------------------------------------------
+ Ran 3 tests in 13.537s
OK
----

+ ////
+ ==== Making Sure Our Container Starts on Boot
+
+ ((("Container", "automatic booting/reloading of")))
+ Our final step is to make sure
+ that the server starts up our container automatically on boot,
+ and reloads it automatically if it crashes.
+
+ (used to need systemd, now you can just set restart_policy.)
+ ////
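As the draft note above says, no systemd unit is needed for this any more. As a hedged sketch, it would be a single extra option on the `Run container` task:

```yaml
# Sketch: restart the container automatically if it crashes.
# restart_policy is an option of community.docker.docker_container;
# "always" also restarts the container when the Docker daemon comes up at boot.
- name: Run container
  community.docker.docker_container:
    name: superlists
    image: superlists
    state: started
    restart_policy: always
```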
+

.More Debugging Tips and Commands
*******************************************************************************
@@ -812,6 +982,7 @@ like yours.((("", startref="ansible29")))
Deploying to Live
^^^^^^^^^^^^^^^^^

+ TODO update this

So, let's try using it for our live site!
@@ -897,16 +1068,28 @@ Here are some resources I used for inspiration:
.Automated Deployments
*******************************************************************************

+ TODO Maybe recap the key steps of any deployment:
+
+ - installing docker (assuming that's the only system dep)
+ - getting our image onto the server (normally just with docker push/pull)
+ - setting env vars & secrets
+ - attaching a database (a mounted file in our case)
+ - configuring ports
+ - running migrations
+ - and running or re-running the container
+
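The steps above can be sketched as a playbook outline. This is hypothetical: the task names match this chapter, but the module arguments are elided (`...`), so treat it as a recap rather than a runnable playbook:

```yaml
# Outline only: one task per deployment step, module arguments elided.
- hosts: all
  tasks:
    - name: Install docker                      # 1. system dependency
      ansible.builtin.apt: ...
    - name: Upload image to server              # 2. or use docker push/pull
      ansible.builtin.copy: ...
    - name: Ensure .env file exists             # 3. env vars & secrets
      ansible.builtin.copy: ...
    - name: Ensure db.sqlite3 file exists outside container  # 4. database
      ansible.builtin.file: ...
    - name: Run container                       # 5. ports + (re)running
      community.docker.docker_container: ...
    - name: Run migration inside container      # 6. migrations
      community.docker.docker_container_exec: ...
```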
+ old content follows:
+

Idempotency::
- If your deployment script is deploying to existing servers, you need to
- design them so that they work against a fresh installation 'and' against
+ If your deployment script is deploying to existing servers,
+ you need to design it so that it works against a fresh installation 'and' against
a server that's already configured.
((("idempotency")))
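Ansible modules are mostly idempotent by design. For example (a sketch, assuming the apt-based Docker install used earlier in the chapter), re-running this task against an already-provisioned server reports `ok` rather than `changed`:

```yaml
# Idempotent: apt only makes (and reports) a change
# when something actually needs installing or upgrading.
- name: Install docker
  ansible.builtin.apt:
    name: docker.io  # package name is an assumption for illustration
    state: latest
    update_cache: true
```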

Automating provisioning::
- Ultimately, _everything_ should be automated, and that includes spinning up
- brand new servers and ensuring they have all the right software installed .
- This will involve interacting with the API of your hosting provider.
+ Ultimately, _everything_ should be automated, and that includes spinning up
+ brand new servers.
+ This will involve interacting with the API of your hosting provider.

Security::
A serious discussion of server security is beyond the scope of this book,
0 commit comments