`user/enterprise/platform-tips.md`
In order to obtain live logs from a specific running pod, one can run the following *on your local machine*:
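A minimal sketch of streaming those logs with `kubectl` (the pod name below is a placeholder; list the running pods first to find the one you need):

```
$ kubectl get pods
$ kubectl logs -f [pod-name]
```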
> We strongly recommend setting up a process that grabs live logs from the pods' stdout and stores them in the logging storage of your choice. These stored logs can be useful when diagnosing or troubleshooting pods that were killed and/or re-deployed. The size of the logs depends strictly on your usage, so please adjust retention to your needs. As a rule of thumb, four weeks of log storage is recommended.
### Worker logs
This section describes how to obtain worker logs with Ubuntu as the host operating system.
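As a hedged sketch, assuming the worker runs as a systemd service named `travis-worker` (the unit name and log location may differ in your installation, for example if the worker runs in a Docker container, so adjust accordingly):

```
$ sudo systemctl status travis-worker
$ sudo journalctl -u travis-worker --since "1 hour ago"
```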
### Console access in TCIE 3.x
For TCIE 3.x, you gain access to individual pods through the `kubectl` command (the equivalent of `travis bash` in Enterprise 2.x versions).
In order to run console commands, run the console in `travis-api-pod`:
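For example, using the same invocation shown in the Sidekiq section below (substitute the actual pod name from `kubectl get pods`; newer `kubectl` versions may require a `--` before the command):

```
$ kubectl exec -it [travis-api-pod] /app/script/console
```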
If services fail to come up with an error like the following:

```
Are you trying to mount a directory onto a file (or vice-versa)? Check
if the specified host path exists and is the expected type
```
To address this, remove the RabbitMQ cert from `/etc/travis/ssl/`:
```
$ sudo rm -r /etc/travis/ssl/rabbitmq.cert
```
After this, perform a full reboot of the system, and everything should start properly again.
## View Sidekiq Queue Statistics
In the past, there have been reported cases where the system became unresponsive: it took quite a while until jobs were worked off, or they were not picked up at all. We found that full Sidekiq queues often played a part in this. To get some insight, it helps to retrieve some basic statistics in the Ruby console:
**TCIE 3.x**: run `kubectl exec -it [travis-api-pod] /app/script/console` *on your local machine*.
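Once inside the console, a brief sketch of the kind of statistics you can pull with Sidekiq's public API (the queue name is a placeholder, not a queue taken from this guide):

```
>> stats = Sidekiq::Stats.new
>> stats.enqueued   # total number of jobs waiting across all queues
>> stats.queues     # hash of queue names and their current sizes
>> Sidekiq::Queue.new("[queue-name]").size
```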