MFP Cloud 8.0 to PMF Cloud 9.1 on Red Hat OpenShift (Fresh Install)

AUDIENCE

This document is intended for customers who run IBM MobileFirst Foundation 8.0 on OpenShift 4.14 or above and want to migrate to Persistent Mobile Foundation (PMF) 9.1.

SCOPE

This document describes the steps required to migrate IBM MobileFirst Foundation 8.0 running on OpenShift 4.14 or above to Persistent Mobile Foundation 9.1.

OUT OF SCOPE

This document does not cover on-premises or any other migration scenarios, database migration, or OpenShift cluster setup.

MIGRATION TYPE

The migration is a fresh install. The existing IBM MobileFirst Foundation 8.0 setup on OpenShift is not affected.

ASSUMPTIONS

  • Database is not migrated. The same database is used for Persistent Mobile Foundation 9.1.
  • Customers should set up an OpenShift cluster with the same configuration as the existing IBM MobileFirst Foundation 8.0 OpenShift cluster.
  • The recommended OpenShift cluster version for Persistent Mobile Foundation (PMF) 9.1 is 4.15.22.
  • Customers should use the Persistent Systems provided deployment package for Persistent Mobile Foundation 9.1.
  • Customers need to take care of changes in mobile client applications (Android, iOS) due to the changes in hostname/ingress URL as per the new Persistent Mobile Foundation (PMF) 9.1 setup.
  • Customers need to ensure that their mobile client applications (Android, iOS) and IBM MobileFirst Foundation 8.0 adapters are Java 17 compatible.

PREREQUISITES

Ensure the following prerequisites are completed before proceeding:

1. DB2 DATABASE SETUP (existing database)
2. OCP CLUSTER 4.15 (a new cluster)
3. PMF-OpenShift-Pak-.tar.gz (provided by Persistent Systems)

IMPORTANT NOTE: Client applications must be rebuilt and republished if the PMF hostname is changed.

MIGRATION STEPS

To migrate the DB2 database from MFP 8.0.0 to PMF 9.1, update the product version in the database using the queries below.

A. UPDATE SERVER_VERSION SET SERVER_VERSION='9.1.0';

B. UPDATE MFPADMIN_VERSION SET MFPADMIN_VERSION='9.1.0';

C. UPDATE APPCNTR_VERSION SET APPCNTR_VERSION='9.1.0';

NOTE: Points A and B are mandatory; point C is required only if the Application Center (appcenter) component from MFP 8.0.0 is used.
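
As an illustration only, these updates might be applied from a DB2 command line as sketched below; the database name, user, and schema are placeholders, and the tables may live in different schemas or databases depending on your MFP 8.0.0 setup.

     # Connect to the database that holds the MFP 8.0.0 tables (placeholder values)
     db2 connect to <MFP-DB-NAME> user <DB-USER>
     # Points A and B (mandatory)
     db2 "UPDATE <SCHEMA>.SERVER_VERSION SET SERVER_VERSION='9.1.0'"
     db2 "UPDATE <SCHEMA>.MFPADMIN_VERSION SET MFPADMIN_VERSION='9.1.0'"
     # Point C (only if the Application Center component is used)
     db2 "UPDATE <SCHEMA>.APPCNTR_VERSION SET APPCNTR_VERSION='9.1.0'"
     db2 connect reset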

Prerequisite steps for deployment

Before executing steps A and B below, ensure that the oc CLI is installed and that you are logged in to the MFP 8.0.0 cluster with the oc login command, using the correct project namespace. (Steps A and B must be performed against the MFP 8.0.0 cluster.)
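
For example (token, API server URL, and namespace are placeholders):

     oc login --token=<TOKEN> --server=<MFP-8.0.0-API-SERVER-URL>
     oc project <NAMESPACE>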

A.

a. Get the list of all secrets created in the existing MFP 8.0.0 cluster setup using the command below.

     oc get secrets --namespace=<NAMESPACE>

b. Compare the list of secrets from point a with the secrets listed below.

     builder-dockercfg-* 
     builder-token-* 
     default-dockercfg-* 
     default-token-* 
     deployer-dockercfg-* 
     deployer-token-*
     mf-operator-dockercfg-* 
     mf-operator-token-* 
     ibm-mf-mfpliveupdate-clientsecret 
     ibm-mf-pushclientsecret 
     ibm-mf-serveradminclientsecret 
     sh.helm.release.v1.ibm-mf.v1

c. Export the secrets that appear in point a but not in point b (discard the secrets that are common to both points) using the command below:

   oc get secret <secret-name> -n <source-namespace> -o yaml > <secret-name>.yml

NOTE: Repeat the above command to export each secret identified in point c, except the image pull secret, as it will be newly created for the PMF 9.1 setup.
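
If there are several such secrets, the export can be scripted; the sketch below assumes the secret names from point c are listed, one per line, in a file named secrets-to-export.txt (a hypothetical helper file):

     # Export every secret listed in secrets-to-export.txt as <secret-name>.yml
     while read -r s; do
       oc get secret "$s" -n <source-namespace> -o yaml > "$s.yml"
     done < secrets-to-export.txt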

B.

a. Get the list of all configmaps created in the existing MFP 8.0.0 cluster setup using the command below.

     oc get configmap --namespace=<NAMESPACE>

b. Compare the list of configmaps from point a with the configmaps listed below.

     ibm-mf-appcenter-configmap
     ibm-mf-liveupdate-configmap
     ibm-mf-push-configmap
     ibm-mf-server-configmap
     kube-root-ca.crt
     openshift-service-ca.crt

c. Export the configmaps that appear in point a but not in point b (discard the configmaps that are common to both points) using the command below:

     oc get configmap <configmap-name> -n <source-namespace> -o yaml > <configmap-name>.yml

NOTE: Repeat the above command to export each configmap identified in point c.

C. To change the namespace name while migrating from MFP 8.0.0 to PMF 9.1, follow the step below (DO THIS STEP only if you want to use a namespace different from the MFP 8.0.0 one).

a. Edit all the exported secret and configmap YAML files and change the namespace to the desired namespace.
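
One way to do this in bulk (a sketch, assuming GNU sed and that all exported .yml files are in the current directory; namespaces are placeholders):

     # Replace the MFP 8.0.0 namespace with the new PMF 9.1 namespace in every exported file
     sed -i 's/namespace: <mfp-namespace>/namespace: <pmf-namespace>/' *.yml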

NOTE: Follow either step D or step E based on your setup requirements.

Before executing steps D and E below, ensure that you are logged in to the PMF 9.1 cluster with the oc login command, using the correct project namespace. (Steps D and E must be performed against the PMF 9.1 cluster.)

D. To pull images using the Persistent Systems provided public Docker Hub registry, follow these steps:

a. Create an image pull secret using the command below:

     oc create secret docker-registry -n <NAMESPACE> <PULLSECRET-NAME> --docker-server=<REGISTRY> --docker-username=<USERNAME> --docker-password=<PASSWORD>

NOTE: NAMESPACE and PULLSECRET-NAME are decided by the customer; REGISTRY, USERNAME, and PASSWORD are provided by Persistent Systems. Also ensure that the namespace has already been created in the PMF 9.1 cluster. If not, create it using the command: oc new-project <NAMESPACE>

b. If using MFP push notification services, follow the step below to use the new FCM v1 API.

 Create a new custom configmap using the command below:
     oc create configmap <configmap-name> --from-file=fcm-v1-firebase.json

NOTE: The file name must be fcm-v1-firebase.json. Follow the PMF documentation to see how to generate the fcm-v1-firebase.json file.
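
To confirm that the configmap was created with the expected key (the key must match the file name), you can, for example, inspect it:

     # The data section should list a key named fcm-v1-firebase.json
     oc describe configmap <configmap-name> -n <NAMESPACE>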

c. Create a secret for each exported secret using the command below (see point c of A for the list of all exported secrets):

     oc apply -f <secret-name>.yml -n <destination-namespace>

d. Create a configmap for each exported configmap using the command below (see point c of B for the list of all exported configmaps):

     oc apply -f <configmap-name>.yml -n <destination-namespace>

NOTE: Ensure the above steps c and d are completed for all exported secrets and configmaps.
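
To verify, list what now exists in the PMF 9.1 namespace and compare it with the exported files, for example:

     oc get secrets -n <destination-namespace>
     oc get configmaps -n <destination-namespace>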

e. Extract the Persistent Systems provided package PMF-OpenShift-Pak-.tar.gz.

f. Go inside the deploy directory of the extracted package and modify the following deployment YAML files as stated below:

  • service_account.yaml (provide the created imagePullSecret name (see point a of D for the pull secret name))
  • role_binding.yaml (provide the namespace name (PMF 9.1 namespace))
  • charts_v1_mfoperator_cr.yaml (modify the following values):

#1. Change hostname as per PMF 9.1

    ingress:
      hostname:

#2. Change 'enabled' to false

    dbinit:
      enabled: false

#3. Change the values below for all components according to the existing MFP 8.0.0 setup

    db:
      type: "db2"
      host: ""
      port: ""
      name: ""
      secret: ""
      schema: ""
      ssl: false
      sslTrustStoreSecret: ""
      driverPvc: ""
      adminCredentialsSecret: ""

#4. Change the values below for all components according to the existing MFP 8.0.0 setup

    resources:
      requests:
        cpu: 
        memory: 
      limits:
        cpu: 
        memory: 

#5. Change the values below for all components according to the existing MFP 8.0.0 setup

    replicas: 
    autoscaling:
      enabled: false
      min: 
      max: 
      targetcpu: 

#6. Change the values below for all components according to the existing MFP 8.0.0 setup

    tolerations:
      enabled: false
      key: "dedicated"
      operator: "Equal"
      value: "ibm-mf-liveupdate"
      effect: "NoSchedule"

#7. Change the values below for all components according to the existing MFP 8.0.0 setup

    pdb:
      enabled: true
      min: 1

#8. Change the values below for all components according to the existing MFP 8.0.0 setup

    keystoreSecret: ""

#9. Change the values below for all applicable components according to the existing MFP 8.0.0 setup

      pullSecret: ""

      consoleSecret: ""

#10. Ensure the values below are updated according to the existing MFP 8.0.0 setup

    adminClientSecret: ""
    pushClientSecret: ""
    liveupdateClientSecret: ""
    analyticsClientSecret: ""
    receiverClientSecret: ""
    internalClientSecretDetails:
      adminClientSecretId: mfpadmin
      adminClientSecretPassword: nimdapfm
      pushClientSecretId: push
      pushClientSecretPassword: hsup

NOTE: If you have any keystoreSecret secrets in MFP 8.0.0, ensure they are updated for PMF 9.1 as well; follow the PMF documentation to see how to update keystoreSecret.

NOTE: Refer to the existing MFP 8.0.0 deployment YAML files to verify the ingress hostname, components used, DB details, secrets, replicas, autoscaling, resource limits, custom configmaps, etc.
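
For example, the following commands, run against the MFP 8.0.0 cluster, can help collect these values (resource names vary per setup and are placeholders):

     oc get route --namespace=<NAMESPACE>                                  # ingress hostname
     oc get deployments,hpa,pdb --namespace=<NAMESPACE>                    # replicas, autoscaling, pod disruption budgets
     oc get deployment <deployment-name> --namespace=<NAMESPACE> -o yaml   # resources, tolerations, secrets in use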

E. To pull images using your own private Docker Hub registry, follow these steps:

a. Extract the Persistent Systems provided package PMF-OpenShift-Pak-.tar.gz.

b. If using MFP push notification services, follow the steps below to use the new FCM v1 API.

 Create a new custom configmap using the command below:
     oc create configmap <configmap-name> --from-file=fcm-v1-firebase.json

NOTE: The file name must be fcm-v1-firebase.json.

c. Create a secret for each exported secret using the command below (see point c of A for the list of all exported secrets):

     oc apply -f <secret-name>.yml -n <destination-namespace>

d. Create a configmap for each exported configmap using the command below (see point c of B for the list of all exported configmaps):

     oc apply -f <configmap-name>.yml -n <destination-namespace>

NOTE: Ensure the above steps c and d are completed for all exported secrets and configmaps.

e. Go to the image directory and load the images using the command below, then push them to your private Docker Hub registry:

     for i in <IMAGE-NAME>.tar.gz; do docker load -i $i; done

(Refer to the PMF documentation on how to push images to a private Docker repository for details.)
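
For reference, a hedged sketch of retagging and pushing one loaded image (registry, repository, image name, and tag are placeholders; the PMF documentation is the authoritative reference):

     docker login <PRIVATE-REGISTRY>
     docker tag <IMAGE-NAME>:<TAG> <PRIVATE-REGISTRY>/<REPOSITORY>/<IMAGE-NAME>:<TAG>
     docker push <PRIVATE-REGISTRY>/<REPOSITORY>/<IMAGE-NAME>:<TAG>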

f. Go inside the deploy directory of the extracted package and modify the following deployment YAML files as stated below:

  • service_account.yaml (provide the created imagePullSecret name (see point a of D for the pull secret name))
  • role_binding.yaml (provide the namespace name (PMF 9.1 namespace))
  • charts_v1_mfoperator_cr.yaml (modify the following values):

#1. Change hostname as per PMF 9.1

    ingress:
      hostname:

#2. Change 'enabled' to false

    dbinit:
      enabled: false

#3. Change the values below for all components according to the existing MFP 8.0.0 setup

    db:
      type: "db2"
      host: ""
      port: ""
      name: ""
      secret: ""
      schema: ""
      ssl: false
      sslTrustStoreSecret: ""
      driverPvc: ""
      adminCredentialsSecret: ""

#4. Change the values below for all components according to the existing MFP 8.0.0 setup

    resources:
      requests:
        cpu: 
        memory: 
      limits:
        cpu: 
        memory: 

#5. Change the values below for all components according to the existing MFP 8.0.0 setup

    replicas: 
    autoscaling:
      enabled: false
      min: 
      max: 
      targetcpu: 

#6. Change the values below for all components according to the existing MFP 8.0.0 setup

    tolerations:
      enabled: false
      key: "dedicated"
      operator: "Equal"
      value: "ibm-mf-liveupdate"
      effect: "NoSchedule"

#7. Change the values below for all components according to the existing MFP 8.0.0 setup

    pdb:
      enabled: true
      min: 1

#8. Change the values below for all components according to the existing MFP 8.0.0 setup

    keystoreSecret: ""

NOTE: If you have any keystoreSecret secrets in MFP 8.0.0, ensure they are updated for PMF 9.1 as well; follow the PMF documentation to see how to update keystoreSecret.

NOTE: Refer to the existing MFP 8.0.0 deployment YAML files to verify the ingress hostname, components used, DB details, secrets, replicas, autoscaling, resource limits, custom configmaps, etc.

DEPLOYMENT

A. Go to the deploy directory and run the commands below:

    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:<NAMESPACE>:mf-operator
    oc adm policy add-scc-to-group mf-operator system:serviceaccounts:<NAMESPACE>
    oc create -f crds/charts_v1_mfoperator_crd.yaml --namespace=<NAMESPACE>
    oc create -f . --namespace=<NAMESPACE>

NOTE: Wait for the operator pod to come up with Running status. To check the operator pod status, use the command: oc get pods

B. Apply the custom resource using the command below:

    oc apply -f crds/charts_v1_mfoperator_cr.yaml --namespace=<NAMESPACE>

NOTE: Wait for all PMF pods to come up with Running status. To check the pod status, use the command: oc get pods

TESTING

After the deployment, verify that the console URL is up and running using: <protocol>://<hostname>/mfpconsole
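
For example, a quick check from the command line (assuming HTTPS; -k skips certificate verification and may be needed with self-signed certificates):

     curl -k -I https://<hostname>/mfpconsole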
