Migrating MFP 8.0 to ROCP (Existing)
This topic is meant for existing IBM MobileFirst Foundation (MFP) 8.0 installations on Red Hat® OpenShift® 4.14 that are to be migrated to PMF 9.1.
The migration type is an upgrade install: the existing MFP 8.0 installation is upgraded to the new PMF 9.1 on Red Hat OpenShift.
This topic does not cover the following scenarios.
- On-premises to on-premises or any other migration scenarios.
- Db2 database migration.
- Red Hat® OpenShift® cluster setup.
- Any changes in the mobile applications resulting from changes in infrastructure.
Prerequisites
Before you begin the migration process, ensure the following.
- You have not migrated the Db2 database.
- You are using the same Db2 database for PMF 9.1.
- You have set up a Red Hat OpenShift cluster with the right number of nodes and sufficient memory and vCPUs.
- PMF Version 9.1 requires Red Hat OpenShift cluster Version 4.15.22.
- You are using `PMF-OpenShift-Pak-<version>.tar.gz` (provided by Persistent Systems) for PMF 9.1.
- Your mobile client applications (Android, iOS) and IBM MobileFirst Foundation 8.0 adapters are Java™ 17 compatible.
- If you used the Analytics component in MFP 8.0, take a backup of the analytics data. For more information, see Migrate analytics data from OnPrem MFP 8.0 to PMF Cloud 9.1 on Kubernetes.
- You take care of host and port changes in the mobile client applications (Android, iOS) that result from changes to the domain URL in the latest PMF 9.1 setup.
- The PMF ingress/hostname is not changed.
Note:
- The client application must be rebuilt and republished if the host or port of the MFP Server changes. To avoid such changes in the client application, it is recommended that client applications use a domain name instead of a fixed IP address to connect to the MFP backend service, and that the domain name is pointed to the PMF backend service after the migration.
- It is not mandatory to migrate client SDKs to newer versions for PMF 9.1, but you are encouraged to upgrade existing client SDKs to the newer SDK versions to get the latest security-related enhancements in the SDK code. Refer to the links below to migrate your existing client SDKs.
Important: Do not change the MFP/PMF hostname.
Procedure
Proceed as follows.
- Log in to the MFP 8.0.0 cluster by using the `oc login` command and select the correct project namespace.
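  For example, a minimal login sketch; the API server URL, token, and namespace are placeholders for your environment:

  ```
  # Log in to the cluster (server URL and token are environment-specific)
  oc login https://<api-server-url>:6443 --token=<login-token>

  # Switch to the project namespace that hosts MFP 8.0
  oc project <NAMESPACE>
  ```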
  a. Pull images from the public Docker Hub registry provided by Persistent Systems by creating an image pull secret with the following command.

  ```
  oc create secret docker-registry -n <NAMESPACE> pmf-image-pullsecret --docker-server=<REGISTRY> --docker-username=<USERNAME> --docker-password=<PASSWORD>
  ```
  Specify the values provided by Persistent Systems for `REGISTRY`, `USERNAME`, and `PASSWORD`. Keep the `NAMESPACE` the same as the MFP 8.0 namespace. `pmf-image-pullsecret` is the pull secret name; if you want, you can choose any other pull secret name.

  b. Extract the `PMF-OpenShift-Pak-<version>.tar.gz` package by using the following command.

  ```
  tar -xvzf PMF-Openshift-Pak-<VERSION>.tar.gz
  ```
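  As a sanity check before you continue, confirm that the pull secret exists and inspect the extracted package; the extracted directory name below is an assumption based on the archive name:

  ```
  # Confirm the image pull secret was created in the MFP 8.0 namespace
  oc get secret pmf-image-pullsecret -n <NAMESPACE>

  # List the extracted package; it should contain the deploy and es directories used below
  ls PMF-Openshift-Pak-<VERSION>/
  ```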
  c. Go to the `deploy` directory of the extracted package and modify the following deployment YAML files as stated.
  - `service_account.yaml` - provide the imagePullSecret.
  - `role_binding.yaml` - provide the namespace name (as per the MFP 8.0 `charts_v1_mfoperator_cr.yaml`).
  - `charts_v1_esoperator_cr.yaml` - provide all necessary details and modify the following values.
    - Enter the hostname as per the PMF documentation. Change the hostname as per the configured ingress.

      ```
      ingress:
        hostname:
      ```
  d. Go to the `crds` directory. Copy the values from the old `charts_v1_mfoperator_cr.yaml` file in MFP 8.0 to the new `charts_v1_mfoperator_cr.yaml` file in PMF 9.1. (These values include the ingress hostname, db details, db secret, and other custom configurations.) Refer below for details:
  - Enable or disable components as per the existing MFP 8.0.0 setup.

    ```
    mfpserver:
      enabled: true
    mfppush:
      enabled: true
    mfpliveupdate:
      enabled: true
    mfpanalytics:
      enabled: true
    mfpanalytics_recvr:
      enabled: true
    mfpappcenter:
      enabled: true
    ```
  - Change the pullSecret (step 1-a).

    ```
    image:
      pullPolicy: IfNotPresent
      pullSecret: "pmf-image-pullsecret"
    ```
  - Change the hostname as per the MFP 8.0.0 setup.

    ```
    ingress:
      hostname:
    ```
  - Ensure that the `enabled` property is set to `true`.

    ```
    dbinit:
      enabled: true
    ```
  - Change the following values for all components according to the existing MFP 8.0.0 setup.

    ```
    db:
      type: "db2"
      host: ""
      port: ""
      name: ""
      secret: ""
      schema: ""
      ssl: false
      sslTrustStoreSecret: ""
      driverPvc: ""
      adminCredentialsSecret: ""
    ```

    Note: The schema name case should match the schema name case in the Db2 database. Do not simply copy it from the old CR YAML file.
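    To check the exact case of the schema name, you can query the Db2 catalog; a sketch, assuming the db2 command line processor is available and `<DBNAME>` is your MFP database:

    ```
    # List schema names exactly as stored in the catalog (case-sensitive)
    db2 connect to <DBNAME>
    db2 "SELECT SCHEMANAME FROM SYSCAT.SCHEMATA"
    db2 connect reset
    ```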
  - Change the following values for all the components according to the existing MFP 8.0.0 setup.

    ```
    resources:
      requests:
        cpu:
        memory:
      limits:
        cpu:
        memory:
    ```
  - Change the following values for all the components according to the existing MFP 8.0.0 setup.

    ```
    replicas:
    autoscaling:
      enabled: false
      min:
      max:
      targetcpu:
    ```
  - Change the following values for all the components according to the existing MFP 8.0.0 setup.

    ```
    tolerations:
      enabled: false
      key: "dedicated"
      operator: "Equal"
      value: "ibm-mf-liveupdate"
      effect: "NoSchedule"
    ```
  - Change the following values for all components according to the existing MFP 8.0.0 setup.

    ```
    pdb:
      enabled: true
      min: 1
    ```
  - Change the following values for all components according to the existing MFP 8.0.0 setup.

    ```
    keystoreSecret: ""
    ```
  - Change the following values for all applicable components according to the existing MFP 8.0.0 setup.

    ```
    consoleSecret: ""
    ```
  - Ensure to update the following values according to the existing MFP 8.0.0 setup.

    ```
    adminClientSecret: ""
    pushClientSecret: ""
    liveupdateClientSecret: ""
    analyticsClientSecret: ""
    receiverClientSecret: ""
    internalClientSecretDetails:
      adminClientSecretId: mfpadmin
      adminClientSecretPassword: nimdapfm
      pushClientSecretId: push
      pushClientSecretPassword: hsup
    ```
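  Before you apply the new CR, it can help to diff it against the old one and confirm that every secret it references exists in the namespace; a sketch with placeholder paths and secret names:

  ```
  # Compare the MFP 8.0 CR with the edited PMF 9.1 CR (paths are placeholders)
  diff <MFP80_PACKAGE>/deploy/crds/charts_v1_mfoperator_cr.yaml crds/charts_v1_mfoperator_cr.yaml

  # Confirm that each secret referenced in the CR exists
  for s in <admin-client-secret> <push-client-secret> <console-secret>; do
    oc get secret "$s" -n <NAMESPACE>
  done
  ```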
- (Optional) Create a custom configuration for the mfpanalytics component (only if the analytics component is used).

  ```
  # custom configuration for the analytics component
  oc create configmap analytics-custom-config --from-file=config.properties
  ```

  For more information on `config.properties`, see Deploying PMF on existing container platform.

  ```
  customConfiguration: "analytics-custom-config"
  ```
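  You can verify that the config map was created and carries your properties file; for example:

  ```
  # Show the analytics custom configuration, including the embedded config.properties
  oc describe configmap analytics-custom-config -n <NAMESPACE>
  ```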
- (Optional) Create a custom configuration for the mfppush component (only if the push component is used). The file name must be `fcm-v1-firebase.json`.

  ```
  oc create configmap <configmap-name> --from-file=fcm-v1-firebase.json
  ```
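  For example, with a hypothetical config map name `push-fcm-config`:

  ```
  # Create the push custom configuration from the Firebase service account file
  oc create configmap push-fcm-config --from-file=fcm-v1-firebase.json -n <NAMESPACE>

  # Verify the config map contents
  oc describe configmap push-fcm-config -n <NAMESPACE>
  ```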
- (Optional) To pull images by using your own private Docker Hub registry, see the "Migrating MFP 8.0…" topics.
- Go to the `deploy` directory and run the following commands.

  ```
  oc apply -f crds/charts_v1_mfoperator_crd.yaml
  oc apply -f service_account.yaml
  oc apply -f role.yaml
  oc apply -f role_binding.yaml
  oc apply -f operator.yaml
  ```
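  Confirm that the PMF operator pod is up before moving on; for example:

  ```
  # Look for the PMF operator pod in Running state
  oc get pods -n <NAMESPACE>
  ```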
- Once the PMF operator pod is up and running, run the following command to upgrade the CR.

  ```
  oc apply -f crds/charts_v1_mfoperator_cr.yaml
  ```

  Verify that the pods reflect the PMF 9.1 images.
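  One way to check the image versions is a jsonpath query; a sketch (the namespace is a placeholder):

  ```
  # Print each pod with its container images; confirm they are PMF 9.1 images
  oc get pods -n <NAMESPACE> \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
  ```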
- Go to the `es/deploy` directory of the extracted package and modify the following deployment YAML files.

  Note: Upgrading Elasticsearch for Mobile Foundation Analytics also follows the same steps described in this step.

  - `service_account.yaml` - provide the created imagePullSecret.
  - `role_binding.yaml` - provide the namespace name as per MFP 8.0.
  - `charts_v1_esoperator_cr.yaml` - copy the values from the old `charts_v1_esoperator_cr.yaml` file in MFP 8.0 to the new `charts_v1_esoperator_cr.yaml` file in PMF 9.1. (These values include the storage class name, claim name, shards, replicas, CPU and memory limits, and other configurations.)
  - Change the following values for all the components according to the existing MFP 8.0.0 setup.

    ```
    image:
      pullSecret: ""
    ```
  - Change the following values for all the applicable components according to the existing MFP 8.0.0 setup.

    ```
    persistence:
      storageClassName: ""  # claimName/storageClassName
      claimName: ""
      size: 20Gi
    shards: "3"
    replicasPerShard: "1"
    masterReplicas: 1
    clientReplicas: 1
    dataReplicas: 1
    tolerations:
      enabled: false
      key: "dedicated"
      operator: "Equal"
      value: "ibm-es"
      effect: "NoSchedule"
    dataResources:
      requests:
        cpu: 500m
        memory: 1024Mi
      limits:
        cpu: 1000m
        memory: 10Gi
    resources:
      requests:
        cpu: 750m
        memory: 1024Mi
      limits:
        cpu: 1000m
        memory: 1024Mi
    ```

    Note: Ensure to copy all the required values from the old CR YAML file to the new one.
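    A quick way to catch missed values is to diff the old and new ES CR files; the paths below are placeholders for where the two packages reside:

    ```
    # Compare the MFP 8.0 ES CR with the edited PMF 9.1 ES CR
    diff <MFP80_PACKAGE>/es/deploy/crds/charts_v1_esoperator_cr.yaml crds/charts_v1_esoperator_cr.yaml
    ```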
- Go to the `es/deploy` directory and run the following commands.

  ```
  oc apply -f crds/charts_v1_esoperator_crd.yaml
  oc apply -f service_account.yaml
  oc apply -f role.yaml
  oc apply -f role_binding.yaml
  oc apply -f operator.yaml
  ```
- Once the ES operator pod is up and running, run the following command to upgrade the CR file.

  ```
  oc apply -f crds/charts_v1_esoperator_cr.yaml
  ```
- Verify that the pods reflect the PMF 9.1 images. If the Elasticsearch pods do not reflect the PMF 9.1 images, uninstall the MFP 8.0 es-operator and install the PMF 9.1 es-operator.

  a. To uninstall the MFP 8.0 es-operator, go to the MFP 8.0 `es/deploy` directory and run the following commands.

  ```
  oc delete -f crds/charts_v1_esoperator_cr.yaml
  oc delete -f service_account.yaml
  oc delete -f role.yaml
  oc delete -f role_binding.yaml
  oc delete -f operator.yaml
  oc delete -f crds/charts_v1_esoperator_crd.yaml
  ```

  b. To install the PMF 9.1 es-operator, go to the PMF 9.1 `es/deploy` directory and run the following commands.

  ```
  oc apply -f crds/charts_v1_esoperator_crd.yaml
  oc apply -f service_account.yaml
  oc apply -f role.yaml
  oc apply -f role_binding.yaml
  oc apply -f operator.yaml
  oc apply -f crds/charts_v1_esoperator_cr.yaml
  ```

  c. Verify that the Elasticsearch pods reflect the PMF 9.1 images.
- If the Analytics component is used in MFP 8.0, restore the backup of the analytics data in PMF 9.1. For more information, see Migrate analytics data from OnPrem MFP 8.0 to PMF Cloud 9.1 on Kubernetes.
- Verify that the consoles are up and running by using the following URL.

  ```
  <protocol>://<hostname>/mfpconsole
  ```
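  For a quick check from a terminal, assuming an HTTPS ingress (`-k` skips certificate verification and is only for a quick probe):

  ```
  # Expect an HTTP 200 or a redirect to the console login page
  curl -k -I https://<hostname>/mfpconsole
  ```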