DOC-4753 added RDI upgrade guide #1089
Conversation
dwdougherty left a comment
Just one minor recommendation. Otherwise, LGTM.
> ### Recovering from failure during a Kubernetes upgrade
>
> If you get an error during the upgrade or some deployments are not OK, then
If OK here is referring to a state shown in a command's output, then using OK is okay. :) Otherwise, use okay. This is not a Google style guide issue per se (it uses both OK and okay to mean the latter), but I've seen this rule on other style guides. Up to you.
@andy-stark-redis maybe we should add:
Run the command `sudo k3s kubectl get all -n <namespace>` and verify that all the pods are running and that the `READY` column for all the pods is `1/1`. For example, a pod in a not-OK state looks like this:

`<pod_name> 0/1 CrashLoopBackOff 1881 (91s ago) 6d17h`
Done. (I've combined this section with the one above about verifying the installation, btw, since the recovery info is quite short and straightforward.)
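For reference, the check described above can be done mechanically. This is only a sketch (not from the RDI docs): it scans `kubectl get all`-style output and flags any pod whose `READY` column is not `1/1`. The field positions assume the default kubectl column layout; against a real cluster you would pipe in the output of `sudo k3s kubectl get all -n <namespace>` instead of the sample lines below.

```shell
# Flag pods whose READY column is not 1/1.
check_pods() {
  awk '$1 ~ /^pod\// && $2 != "1/1" { print $1 " not ready (" $3 ")"; bad = 1 }
       END { exit bad }'
}

# Sample lines mimicking the output format quoted above:
printf '%s\n' \
  'pod/rdi-operator-abc   1/1   Running            0      6d17h' \
  'pod/rdi-api-xyz        0/1   CrashLoopBackOff   1881   6d17h' \
  | check_pods || echo "some pods are not ready"
```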
> ### Recovering from failure during a VM upgrade
>
> If the previous version is v1.4.4 or later, go to the `rdi_install/<NEW_VERSION>` directory
my mistake, it's not the NEW_VERSION, it's the previous version
Fixed.
> [deploy]({{< relref "/integrate/redis-data-integration/data-pipelines/deploy" >}})
> again after this step.
> 1. Download the latest `redis-di`.
@yaronp68 can we remove it? They don't need it for the k8s installation.
Yes, let's remove it. Also, they cannot just get the latest CLI; it's part of the installation package.
@andy-stark-redis please remove line 109
Done.
> docker pull redis/rdi-operator:tagname
> docker pull redis/rdi-api:tagname
> docker pull redis/rdi-monitor:tagname
> docker pull redis/rdi-collector-initializer
@yaronp68 what about the collector-api?
Yes, we should also include it. It will not be an optional component anymore, so let's just add it here.
@galilev , please provide the command so that @andy-stark-redis can add it here.
@andy-stark-redis please add `docker pull redis/rdi-collector-api`
Done.
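Putting the thread together, the resolved set of pull commands could be generated like this. A sketch only: it assumes all five images (now including `rdi-collector-api`) take the same release tag, with `tagname` kept as the placeholder used in the quoted doc text. The script just prints the commands; drop the `echo` (and have Docker installed) to run the pulls for real.

```shell
# Print a docker pull command for each RDI image discussed in this thread.
RDI_TAG="tagname"   # placeholder; substitute the actual release tag
for img in rdi-operator rdi-api rdi-monitor rdi-collector-initializer rdi-collector-api; do
  echo "docker pull redis/${img}:${RDI_TAG}"
done
```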
> - integrate
> - rs
> - rdi
> description: Learn how to install RDI
Suggested change: replace `description: Learn how to install RDI` with `description: Learn how to install and upgrade RDI`.
Done.
> ### Recovering from failure during a VM upgrade
>
> If the previous version is v1.4.4 or later, go to the `rdi_install/<NEW_VERSION>` directory
> and run `sudo redis-di upgrade`.
@galilev If the previous version is higher than 1.4.4, shouldn't we be using the upgrade.sh script instead of the CLI?
@ZdravkoDonev-redis yes, I added it to the original document that I sent to @andy-stark-redis - please see https://redislabs.atlassian.net/wiki/spaces/RED/pages/4856348718/Upgrading+RDI+GA+draft#Recovering-from-failure-in-VM-upgrade
Done.
> ### Upgrading a VM installation with High availability
>
> If there is an active pipeline, the upgrade process will involve upgrading RDI on the active
@galilev Why upgrading first the active version? Wouldn't you have 2 downtimes this way?
Scenario 1 - upgrade first the active RDI (RDI_instance_1) then the passive RDI (RDI_instance_2):
- Upgrade the Active RDI (RDI_instance_1) ---> Downtime, the passive RDI (RDI_instance_2) becomes the active one, since the active is now down
- Upgrade the newly active RDI (RDI_instance_2) ---> Downtime, the newly passive RDI (RDI_instance_1) becomes the active one, since the newly active RDI is down
Scenario 2 - upgrade first the passive RDI (RDI_instance_2) then the active RDI (RDI_instance_1):
- Upgrade the passive RDI (RDI_instance_2) ---> no downtime, upgrade succeeds
- Upgrade the Active RDI (RDI_instance_1) ---> Downtime, the passive RDI (RDI_instance_2) becomes the active one, since the active is now down
Or am I missing something?
@ZdravkoDonev-redis The upgrade won't trigger a switchover. We plan to upgrade the active RDI first, followed by the passive. @yaronp68 asked whether switching to RDI instance 2 and then upgrading RDI instance 1 would result in zero downtime. However, since a switchover itself requires downtime, this approach wouldn't be beneficial.
Okay
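The two scenarios in the question above can be captured in a toy model. This sketch encodes only the question's assumption that upgrading the active node forces a failover (a downtime window) while upgrading a passive node does not; as noted in the reply, the actual upgrade does not trigger a switchover, so this is just the arithmetic behind the original concern.

```shell
# Toy model: count downtime windows for a given upgrade order,
# assuming RDI_instance_1 starts as active and that taking down
# the active node causes a failover to the other node.
count_downtime() {
  active="RDI_instance_1"
  downtime=0
  for node in "$@"; do
    if [ "$node" = "$active" ]; then
      downtime=$((downtime + 1))  # active goes down: one downtime window
      if [ "$active" = "RDI_instance_1" ]; then
        active="RDI_instance_2"   # failover to the other node
      else
        active="RDI_instance_1"
      fi
    fi
  done
  echo "$downtime"
}

echo "Scenario 1 (active first):  $(count_downtime RDI_instance_1 RDI_instance_2) downtime window(s)"
echo "Scenario 2 (passive first): $(count_downtime RDI_instance_2 RDI_instance_1) downtime window(s)"
```

Under these assumptions, upgrading the active node first yields two downtime windows and upgrading the passive node first yields one, which is the point Scenario 2 was making.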
DOC-4753 (based on this Confluence page).
Is there any more to add to this before merging? Also, the K8s installation section says that you should download `redis-di` - can you actually download this separately?