Migrating MFP 8.0 to ROCP (Fresh install)
This topic is intended for existing IBM MobileFirst Foundation 8.0 deployments on Red Hat® OpenShift® 4.14 that want to migrate to PMF 9.1.
The migration is a fresh installation; the existing MFP 8.0 deployment is not affected.
This topic does not cover the following scenarios.
- On-premises to on-premises or any other migration scenarios.
- Db2 database migration.
- Red Hat® OpenShift® cluster set up.
- Any changes to the mobile applications resulting from infrastructure changes.
Prerequisites
Before you begin the migration process, ensure the following.
- You have not migrated the Db2 database.
- You are using the same Db2 database for PMF 9.1.
- You have set up a Red Hat OpenShift cluster with the right number of nodes and sufficient memory and vCPUs.
- PMF Version 9.1 requires Red Hat OpenShift cluster Version 4.15.22.
- You are using the PMF-OpenShift-Pak-<version>.tar.gz package (provided by Persistent Systems) for PMF 9.1.
- Your mobile client applications (Android, iOS) and IBM MobileFirst Foundation 8.0 adapters are Java™ 17 compatible.
Note: A client application must be rebuilt and republished if the host or port of the MFP Server changes. To avoid such changes, it is recommended that client applications use a domain name instead of a fixed IP address to connect to the MFP backend service, and that after the migration the domain name is pointed to the PMF backend service.
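For example, in an MFP 8.0 Android client the server connection is defined in mfpclient.properties; pointing it at a domain name (a sketch with a placeholder host) keeps the app unchanged when the backend moves:
wlServerProtocol = https
wlServerHost = mobile.example.com
wlServerPort = 443
wlServerContext = /mfp/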
Procedure
Proceed as follows.
- Update the product version in the database by running the following queries.
a. UPDATE SERVER_VERSION SET SERVER_VERSION='9.1.0';
b. UPDATE MFPADMIN_VERSION SET MFPADMIN_VERSION='9.1.0';
c. UPDATE APPCNTR_VERSION SET APPCNTR_VERSION='9.1.0';
Note: You must run queries #a and #b. Run query #c only if the Application Center component from MFP 8.0.0 is used.
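For reference, these queries can be run from the Db2 command line processor; a minimal sketch, assuming <MFP_DB>, <DB_USER>, <DB_PASSWORD>, and <SCHEMA> are placeholders for your existing database details:
db2 connect to <MFP_DB> user <DB_USER> using <DB_PASSWORD>
db2 "SET SCHEMA <SCHEMA>"
db2 "UPDATE SERVER_VERSION SET SERVER_VERSION='9.1.0'"
db2 "UPDATE MFPADMIN_VERSION SET MFPADMIN_VERSION='9.1.0'"
db2 connect reset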
- Log in to the MFP 8.0.0 cluster by using the oc login command with the correct project namespace.
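For example (a sketch; the API server URL, token, and namespace are placeholders for your cluster):
oc login https://api.<cluster-domain>:6443 --token=<token>
oc project <NAMESPACE>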
- Get a list of all the secrets created in the existing MFP 8.0.0 cluster setup by using the following command.
oc get secrets --namespace=<NAMESPACE>
- Compare the fetched list of secrets with the following secrets.
builder-dockercfg-*
builder-token-*
default-dockercfg-*
default-token-*
deployer-dockercfg-*
deployer-token-*
mf-operator-dockercfg-*
mf-operator-token-*
ibm-mf-mfpliveupdate-clientsecret
ibm-mf-pushclientsecret
ibm-mf-serveradminclientsecret
sh.helm.release.v1.ibm-mf.v1
- Export the secrets that are different by using the following command.
oc get secret <secret-name> -n <source-namespace> -o yaml > <secret-name>.yml
Note: Repeat the command to export each secret, except the image pull secret, as it will be newly created for the PMF 9.1 setup.
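For example, the three MFP client secrets from the comparison list can be exported in one pass (a sketch, assuming a bash shell):
for s in ibm-mf-mfpliveupdate-clientsecret ibm-mf-pushclientsecret ibm-mf-serveradminclientsecret; do
  oc get secret $s -n <source-namespace> -o yaml > $s.yml
done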
- Get a list of all the configmaps created in the existing MFP 8.0.0 cluster setup by using the following command.
oc get configmap --namespace=<NAMESPACE>
- Compare the list of configmaps with the following configmaps.
ibm-mf-appcenter-configmap
ibm-mf-liveupdate-configmap
ibm-mf-push-configmap
ibm-mf-server-configmap
kube-root-ca.crt
openshift-service-ca.crt
- Export the configmaps that are different by using the following command.
oc get configmap <configmap-name> -n <source-namespace> -o yaml > <configmap-name>.yml
Note: Repeat the command to export each configMap.
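For example, the MFP configMaps from the comparison list can be exported in one pass (a sketch, assuming a bash shell; adjust the list to the configMaps that differ in your setup):
for c in ibm-mf-server-configmap ibm-mf-push-configmap ibm-mf-liveupdate-configmap ibm-mf-appcenter-configmap; do
  oc get configmap $c -n <source-namespace> -o yaml > $c.yml
done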
- (Optional) To change the namespace name while migrating from MFP 8.0.0 to PMF 9.1, proceed as follows.
a. Edit all the exported secrets and configmaps YAML files to change the namespace to the desired namespace.
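A minimal sketch of that edit, assuming GNU sed and that all the exported YAML files are in the current directory:
sed -i 's/namespace: <source-namespace>/namespace: <destination-namespace>/' *.yml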
- Pull images by using either of the following methods, as per your requirement:
- Persistent Systems provided public Docker Hub registry
- Own private Docker Hub registry
Ensure that you are logged in to the PMF 9.1 cluster by using the oc login command with the correct project namespace.
- To pull images by using the Persistent Systems provided public Docker Hub registry, follow these steps.
a. Create an image pull secret by using the following command.
oc create secret docker-registry -n <NAMESPACE> <PULLSECRET-NAME> --docker-server=<REGISTRY> --docker-username=<USERNAME> --docker-password=<PASSWORD>
You can choose the NAMESPACE and PULLSECRET-NAME values, but use the values provided by Persistent Systems for REGISTRY, USERNAME, and PASSWORD. Also ensure that you have already created a namespace in the PMF 9.1 cluster. If not, create one by using the following command.
oc new-project <destination-namespace>
b. If you are using MFP push notification services, follow this step to use the new FCM v1 API. Create a new custom configMap from a file named fcm-v1-firebase.json by using the following command.
oc create configmap <configmap-name> --from-file=fcm-v1-firebase.json
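The fcm-v1-firebase.json file is the service account key downloaded from the Firebase console for your project; it typically has the following shape (a sketch with placeholder values):
{
  "type": "service_account",
  "project_id": "<firebase-project-id>",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "<service-account>@<firebase-project-id>.iam.gserviceaccount.com",
  "token_uri": "https://oauth2.googleapis.com/token"
}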
c. Create a secret for each exported secret (Step 3) by using the following command.
oc apply -f <secret-name>.yml -n <destination-namespace>
d. Create a configMap for each exported configMap by using the following command.
oc apply -f <configMap-name>.yml -n <destination-namespace>
Note: Ensure that steps c and d are completed for all the exported secrets and configMaps.
e. Extract the PMF-OpenShift-Pak-<version>.tar.gz package by using the following command.
tar -xvzf PMF-Openshift-Pak-<VERSION>.tar.gz
f. Go inside the deploy directory of the extracted package and modify the following deployment YAML files as stated.
- service_account.yaml - provide the created imagePullSecret.
- role_binding.yaml - provide the namespace name.
- charts_v1_mfoperator_cr.yaml - provide all necessary details and modify the following values.
- Enter the hostname as per the PMF documentation. Change the hostname as per the configured ingress.
ingress:
  hostname:
- Change the value of 'enabled' to false.
dbinit:
  enabled: false
- Change the following values for all the components according to the existing MFP 8.0.0 setup.
db:
  type: "db2"
  host: ""
  port: ""
  name: ""
  secret: ""
  schema: ""
  ssl: false
  sslTrustStoreSecret: ""
  driverPvc: ""
  adminCredentialsSecret: ""
- Change the following values for all components according to the existing MFP 8.0.0 setup.
resources:
  requests:
    cpu:
    memory:
  limits:
    cpu:
    memory:
- Change the following values for all components according to the existing MFP 8.0.0 setup.
replicas:
autoscaling:
  enabled: false
  min:
  max:
  targetcpu:
- Change the following values for all components according to the existing MFP 8.0.0 setup.
tolerations:
  enabled: false
  key: "dedicated"
  operator: "Equal"
  value: "ibm-mf-liveupdate"
  effect: "NoSchedule"
- Change the following values for all components according to the existing MFP 8.0.0 setup.
pdb:
  enabled: true
  min: 1
- Change the following value for all components according to the existing MFP 8.0.0 setup.
keystoreSecret: ""
- Change the following values for all applicable components according to the existing MFP 8.0.0 setup.
pullSecret: ""
consoleSecret: ""
- Ensure that you update the following values according to the existing MFP 8.0.0 setup.
adminClientSecret: ""
pushClientSecret: ""
liveupdateClientSecret: ""
analyticsClientSecret: ""
receiverClientSecret: ""
internalClientSecretDetails:
  adminClientSecretId: mfpadmin
  adminClientSecretPassword: nimdapfm
  pushClientSecretId: push
  pushClientSecretPassword: hsup
Note:
- Ensure that you update keystoreSecret secrets, if you have any in MFP 8.0.0, for PMF 9.1 as well.
- Take reference from the existing MFP 8.0.0 deployment YAML files to verify the ingress hostname, components used, db details, secrets, replicas, autoscaling, resource limits, custom configmaps, and so on.
To pull images using own private Docker Hub registry, follow these steps.
a. Complete steps#b-e from the above Persistent Systems provided public docker hub registry section.
b. Go to image directory and push image to private docker hub registry using command,
for i in <IMAGE-NAME>.tar.gz; do docker load -i $i;done
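The loop above only loads the image archives into the local Docker daemon. A minimal sketch of tagging and pushing a loaded image to the private registry (the registry, image name, and version are placeholders):
docker tag <IMAGE-NAME>:<VERSION> <PRIVATE-REGISTRY>/<IMAGE-NAME>:<VERSION>
docker push <PRIVATE-REGISTRY>/<IMAGE-NAME>:<VERSION>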
c. Complete step #f from the Persistent Systems provided public Docker Hub registry section above.
- Go to the deploy directory and apply the following commands.
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:<NAMESPACE>:mf-operator
oc adm policy add-scc-to-group mf-operator system:serviceaccounts:<NAMESPACE>
oc create -f crds/charts_v1_mfoperator_crd.yaml --namespace=<NAMESPACE>
oc create -f . --namespace=<NAMESPACE>
Wait for the operator pod to be up and running. Check the operator pod status by using the following command.
oc get pods
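Alternatively, you can block until the operator rollout completes (a sketch; the deployment name mf-operator is an assumption based on the service account used above):
oc wait --for=condition=Available deployment/mf-operator --namespace=<NAMESPACE> --timeout=300s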
oc apply -f crds/charts_v1_mfoperator_cr.yaml --namespace=<NAMESPACE>
Wait for all PMF pods to be up and running. Check the pod status by using the following command.
oc get pods
- Verify that the MFP console is up and running by using the following URL.
<protocol>://<hostname>/mfpconsole
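A quick reachability check from a terminal (a sketch; -k skips TLS verification, which can be useful with self-signed certificates):
curl -k -I https://<hostname>/mfpconsole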