MFP Cloud 8.0 to PMF Cloud 9.1 on Red Hat OpenShift (In-Place)
AUDIENCE
This document is intended for customers who run IBM MobileFirst Foundation (MFP) 8.0 on OpenShift 4.14 or above and want to migrate it to Persistent Mobile Foundation (PMF) 9.1.
SCOPE
The document describes the steps required to migrate IBM MobileFirst Foundation 8.0 running on OpenShift 4.14 or above to Persistent Mobile Foundation 9.1.
OUT OF SCOPE
The document does not cover the following:
- On-premises (OnPrem) or any other migration scenarios
- Database migration
- OpenShift cluster setup
ABOUT CLIENT SDKs AND APPLICATIONS
It is not mandatory to migrate client SDKs to newer versions for PMF 9.1. However, customers are encouraged to upgrade their existing client SDKs to the latest versions to get the latest security-related enhancements in the SDK code. Customers can refer to the links below to migrate their existing client SDKs.
NOTE: If the analytics component is used in MFP 8.0, take a backup of the analytics data before starting the upgrade to PMF 9.1. To take a backup of the analytics data, refer to the link.
MIGRATION TYPE
The migration is an upgrade install. The existing IBM MobileFirst Foundation 8.0 deployment is upgraded in place to PMF 9.1 on OpenShift.
ASSUMPTIONS
- Database is not migrated. The same database is used for Persistent Mobile Foundation 9.1
- Customers should use Persistent Systems provided deployment package for Persistent Mobile Foundation 9.1 upgrade
- Customers need to ensure that their mobile client applications (Android, iOS) and IBM MobileFirst Foundation 8.0 adapters are Java 17 compatible
- PMF ingress/hostname is not changed
PREREQUISITES
Ensure the following prerequisites are completed before proceeding:
- Get the PMF-OpenShift-Pak-<version>.tar.gz package (provided by Persistent Systems)
- Take a database backup
IMPORTANT NOTE: Do NOT change the MFP/PMF hostname.
MIGRATION STEPS
Prerequisite steps for deployment
Before executing steps A and B below, ensure that you are logged in to the MFP 8.0 cluster using the oc login command and are in the correct project namespace. (Steps A and B must be performed against the MFP 8.0 cluster.)
A. To pull images using the Persistent Systems provided Docker Hub registry, follow these steps:
a. Create an image pull secret using the below command:
oc create secret docker-registry -n <NAMESPACE> pmf-image-pullsecret --docker-server=<REGISTRY> --docker-username=<USERNAME> --docker-password=<PASSWORD>
NOTE: The USERNAME, PASSWORD, and REGISTRY details are provided by Persistent Systems. Keep the NAMESPACE the same as the MFP 8.0 namespace. "pmf-image-pullsecret" is the pull secret name; customers can choose any other pull secret name as well.
b. Extract the Persistent Systems provided package PMF-OpenShift-Pak-<version>.tar.gz
c. Go to the deploy directory of the extracted package and modify the following deployment YAML files:
- service_account.yaml: provide the imagePullSecret name (replace REPLACE_SECRET; see point a of A for the pull secret name)
- role_binding.yaml: provide the namespace name (replace REPLACE_NAMESPACE; as per the MFP 8.0 charts_v1_mfoperator_cr.yaml)
- Go to the crds directory. Copy the values from the old charts_v1_mfoperator_cr.yaml in MFP 8.0 to the new charts_v1_mfoperator_cr.yaml in PMF 9.1.
(These values include the ingress hostname, DB details, DB secret, and other custom configurations.) Refer below for details:
#1. Enable/Disable components as per existing MFP 8.0.0 set up
mfpserver:
enabled: true
----
mfppush:
enabled: true
----
mfpliveupdate:
enabled: true
----
mfpanalytics:
enabled: true
----
mfpanalytics_recvr:
enabled: true
----
mfpappcenter:
enabled: true
#2. Change pullSecret (check point a of A for a pull secret name)
image:
pullPolicy: IfNotPresent
pullSecret: "pmf-image-pullsecret"
#3. Change hostname as per MFP 8.0.0
ingress:
hostname:
#4. Ensure 'enabled' is set to true
dbinit:
enabled: true
#5. Change below values for all components according to existing MFP 8.0.0 set up
db:
type: "db2"
host: ""
port: ""
name: ""
secret: ""
schema: ""
ssl: false
sslTrustStoreSecret: ""
driverPvc: ""
adminCredentialsSecret: ""
Note: The schema name case must match the schema name case in the DB2 database. Do not simply copy it from the old CR YAML file.
#6. Change below values for all components according to existing MFP 8.0.0 set up
resources:
requests:
cpu:
memory:
limits:
cpu:
memory:
#7. Change below values for all components according to existing MFP 8.0.0 set up
replicas:
autoscaling:
enabled: false
min:
max:
targetcpu:
#8. Change below values for all components according to existing MFP 8.0.0 set up
tolerations:
enabled: false
key: "dedicated"
operator: "Equal"
value: "ibm-mf-liveupdate"
effect: "NoSchedule"
#9. Change below values for all components according to existing MFP 8.0.0 set up
pdb:
enabled: true
min: 1
#10. Change below values for all components according to existing MFP 8.0.0 set up
keystoreSecret: ""
#11. Change below values for all applicable components according to existing MFP 8.0.0 set up
consoleSecret: ""
#12. Ensure to update below values according to existing MFP 8.0.0 set up
adminClientSecret: ""
pushClientSecret: ""
liveupdateClientSecret: ""
analyticsClientSecret: ""
receiverClientSecret: ""
internalClientSecretDetails:
adminClientSecretId: mfpadmin
adminClientSecretPassword: nimdapfm
pushClientSecretId: push
pushClientSecretPassword: hsup
#13. Create custom configuration for mfpanalytics component (Optional, Only if analytics component is used)
# Custom configuration for analytics component
oc create configmap analytics-custom-config --from-file=config.properties
Refer here for details on config.properties.
customConfiguration: "analytics-custom-config"
#14. Create custom configuration for mfppush component (Optional, Only if push component is used)
oc create configmap <configmap-name> --from-file=fcm-v1-firebase.json
NOTE: The file name must be fcm-v1-firebase.json. Refer to the PMF documentation on how to generate the fcm-v1-firebase.json file.
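The placeholder substitutions described in step (c) above for service_account.yaml and role_binding.yaml can be scripted. A minimal sketch using sed, assuming the tokens REPLACE_SECRET and REPLACE_NAMESPACE appear literally in those files as described; the secret name and namespace passed in are illustrative:

```shell
# replace_tokens: fill in the pull-secret and namespace placeholders in a
# deploy directory extracted from the PMF package.
replace_tokens() {
  deploy_dir=$1 pull_secret=$2 namespace=$3
  # service_account.yaml carries the REPLACE_SECRET token
  sed -i "s/REPLACE_SECRET/$pull_secret/g" "$deploy_dir/service_account.yaml"
  # role_binding.yaml carries the REPLACE_NAMESPACE token
  sed -i "s/REPLACE_NAMESPACE/$namespace/g" "$deploy_dir/role_binding.yaml"
}

# Typical use from the extracted package root (values are illustrative):
#   replace_tokens deploy pmf-image-pullsecret mfp
```

Note that GNU sed is assumed here; on other platforms `sed -i` may require an explicit backup suffix argument.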
B. To pull images using your own private Docker Hub registry, refer to the PMF documentation for details.
DEPLOYMENT
Go to the deploy directory and apply the below commands:
oc apply -f crds/charts_v1_mfoperator_crd.yaml
oc apply -f service_account.yaml
oc apply -f role.yaml
oc apply -f role_binding.yaml
oc apply -f operator.yaml
# Run the following command to upgrade Custom Resource once MF operator pod is up and running
oc apply -f crds/charts_v1_mfoperator_cr.yaml
Verify that the pods reflect the PMF 9.1 images.
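The pod image check can be scripted by listing each pod with its container image and flagging anything not on a 9.1 tag. A small sketch; the namespace `mfp` in the usage comment is illustrative, and the exact tag format may differ in your registry:

```shell
# check_images: reads "pod image" pairs on stdin; prints any pod whose image
# tag does not contain 9.1 and exits nonzero if one is found.
check_images() {
  awk '$2 !~ /:9\.1/ { print "stale image: " $0; bad = 1 } END { exit bad }'
}

# Typical use against the live cluster (hypothetical namespace "mfp"):
#   oc get pods -n mfp \
#     -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[0].image}{"\n"}{end}' \
#     | check_images
```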
NOTE: Upgrading Elasticsearch for Mobile Foundation Analytics also follows the same steps described in this procedure.
Prerequisite steps for elasticsearch deployment
Go to the es/deploy directory of the extracted package and modify the following deployment YAML files:
- service_account.yaml: provide the imagePullSecret name (replace REPLACE_SECRET; see point a of A for the pull secret name)
- role_binding.yaml: provide the namespace name (replace REPLACE_NAMESPACE; as per MFP 8.0)
- Go to the crds directory. Copy the values from the old charts_v1_esoperator_cr.yaml in MFP 8.0 to the new charts_v1_esoperator_cr.yaml in PMF 9.1.
(These values include the storage class name, claim name, shards, replicas, CPU and memory limits, and other configurations.)
Refer below for details:
#1. Change below values for all components according to existing MFP 8.0.0 set up
image:
pullSecret: ""
#2. Change below values for all applicable components according to existing MFP 8.0.0 set up
persistence:
storageClassName: ""  # provide either claimName or storageClassName
claimName: ""
size: 20Gi
shards: "3"
replicasPerShard: "1"
masterReplicas: 1
clientReplicas: 1
dataReplicas: 1
tolerations:
enabled: false
key: "dedicated"
operator: "Equal"
value: "ibm-es"
effect: "NoSchedule"
dataResources:
requests:
cpu: 500m
memory: 1024Mi
limits:
cpu: 1000m
memory: 10Gi
resources:
requests:
cpu: 750m
memory: 1024Mi
limits:
cpu: 1000m
memory: 1024Mi
NOTE: Ensure that all required values are copied from the old Custom Resource YAML to the new one.
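Since `storageClassName` and `claimName` are mutually exclusive, the persistence section takes one of two shapes. A hypothetical example using a dynamically provisioned storage class (the class name `ibmc-block-gold` is purely illustrative; when a storage class is used, leave `claimName` empty, and vice versa):

```yaml
persistence:
  storageClassName: "ibmc-block-gold"   # illustrative class name
  claimName: ""                         # leave empty when a storage class is used
  size: 20Gi
```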
ELASTICSEARCH DEPLOYMENT
Go to the es/deploy directory and apply the below commands:
oc apply -f crds/charts_v1_esoperator_crd.yaml
oc apply -f service_account.yaml
oc apply -f role.yaml
oc apply -f role_binding.yaml
oc apply -f operator.yaml
# Run the following command to upgrade Custom Resource once ES operator pod is up and running
oc apply -f crds/charts_v1_esoperator_cr.yaml
Verify that the pods reflect the PMF 9.1 images. If the ES pods do not reflect the PMF 9.1 images, uninstall the MFP 8.0 es-operator and install the PMF 9.1 es-operator.
a. To uninstall the MFP 8.0 es-operator, go to the MFP 8.0 es/deploy directory and run the below commands:
oc delete -f crds/charts_v1_esoperator_cr.yaml
oc delete -f service_account.yaml
oc delete -f role.yaml
oc delete -f role_binding.yaml
oc delete -f operator.yaml
oc delete -f crds/charts_v1_esoperator_crd.yaml
b. To install the PMF 9.1 es-operator, go to the PMF 9.1 es/deploy directory and run the below commands:
oc apply -f crds/charts_v1_esoperator_crd.yaml
oc apply -f service_account.yaml
oc apply -f role.yaml
oc apply -f role_binding.yaml
oc apply -f operator.yaml
oc apply -f crds/charts_v1_esoperator_cr.yaml
c. Verify that the ES pods reflect the PMF 9.1 images.
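The uninstall/reinstall sequence in steps (a) and (b) above can be sketched as a small script. The directory paths in the usage comment are hypothetical, and `OC` can be overridden with a stub for dry runs:

```shell
# Reinstall the es-operator: delete the MFP 8.0 manifests, then apply the
# PMF 9.1 manifests, in the order listed in steps (a) and (b) above.
OC=${OC:-oc}

reinstall_es_operator() {
  old_dir=$1; new_dir=$2
  # Delete the MFP 8.0 es-operator resources (CR first, CRD last)
  for f in crds/charts_v1_esoperator_cr.yaml service_account.yaml role.yaml \
           role_binding.yaml operator.yaml crds/charts_v1_esoperator_crd.yaml; do
    "$OC" delete -f "$old_dir/$f"
  done
  # Apply the PMF 9.1 es-operator resources (CRD first, CR last)
  for f in crds/charts_v1_esoperator_crd.yaml service_account.yaml role.yaml \
           role_binding.yaml operator.yaml crds/charts_v1_esoperator_cr.yaml; do
    "$OC" apply -f "$new_dir/$f"
  done
}

# Typical use (hypothetical paths to the two extracted packages):
#   reinstall_es_operator mfp-8.0/es/deploy pmf-9.1/es/deploy
```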
NOTE: If the analytics component is used and an analytics data snapshot was taken before the migration, refer to the link to restore the analytics data.
TESTING
After the deployment, verify that the console URL is up and running: <protocol>://<hostname>/mfpconsole
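The reachability check can be scripted with curl. A sketch, assuming HTTPS and a route certificate that curl may not trust (hence -k); the hostname `mf.example.com` in the usage comment is illustrative and should be whatever the ingress was configured with:

```shell
# console_url: build the console URL from a protocol and hostname.
console_url() { printf '%s://%s/mfpconsole' "$1" "$2"; }

# Typical use: expect an HTTP 200 (or a redirect to the console login page).
#   curl -k -s -o /dev/null -w '%{http_code}\n' "$(console_url https mf.example.com)"
```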