Deploy PMF to an existing Red Hat OpenShift Container Platform

Learn how to install a PMF instance on an OpenShift cluster using the PMF Operator.

The steps to deploy PMF on OCP are the same irrespective of how you have obtained the OCP entitlement.

Prerequisites

Ensure that the following prerequisites are in place before you begin installing a PMF instance using the PMF Operator. A quick client-side verification sketch follows this list.

  • OpenShift cluster v4.15.
  • OpenShift client tools (oc).
  • Install and set up Docker.
  • PMF requires a database. Create a supported database and keep the database access details handy for later use. See here.
  • PMF Analytics requires a storage class or a mounted storage volume for persisting Analytics data (NFS is recommended).
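
You can quickly confirm the client-side tools before you begin; the commands below are a minimal check, and the versions in your environment may differ:

    # Verify the OpenShift client is installed and can reach the cluster
    oc version

    # Verify Docker is installed and the daemon is running
    docker version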

Architecture

The image below shows the internal architecture of Mobile services on Red Hat OpenShift.

Installing a PMF instance

Download the PMF package

Download the PMF package for OpenShift from the authorized link. Unpack the archive to a directory.

(Optional) Push PMF images to a private docker container registry

To push PMF images to a private Docker container registry, follow the steps below.

#1. Unpack the PMF package into a working directory (for example, mfospkg).

mkdir mfospkg
tar xzvf PMF-OpenShift-Pak-<version>.tar.gz -C mfospkg/

#2. Load and push the images to the container registry from your local machine. Update the values in the following commands to match your environment.

   #!/bin/bash

   # Update these values to match your environment
   export CONTAINER_REGISTRY_URL="index.docker.io/persistentmobilefoundation"
   export OPERATOR_IMAGE_TAG="9.1.0"
   export IMAGE_TAG="9.1.0"

   cd images

   # Load every image archive into the local Docker image store
   ls * | xargs -I{} docker load --input {}

   # Tag each loaded image with the target registry and push it
   for file in * ; do

      docker tag ${file/.tar.gz/} ${CONTAINER_REGISTRY_URL}/${file/.tar.gz/}
      docker push ${CONTAINER_REGISTRY_URL}/${file/.tar.gz/}

   done

   # Create, annotate, and push a manifest list for each PMF image
   for MF_IMAGE in "mfpf-server" "mfpf-analytics" "mfpf-push" "mfpf-analytics-recvr" "mfpf-liveupdate" "mfpf-appcenter" "mfpf-elasticsearch" "mf-operator" "es-operator"
   do

      docker manifest create ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${OPERATOR_IMAGE_TAG} ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${OPERATOR_IMAGE_TAG}-amd64 --amend --insecure

      docker manifest annotate ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG} ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG}-amd64 --os linux --arch amd64

      docker manifest push ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG} --insecure

   done
  • Update CONTAINER_REGISTRY_URL to point to your own private container registry.
  • Update OPERATOR_IMAGE_TAG and IMAGE_TAG to match the version of the downloaded PMF images. A quick check of the pushed manifests is sketched below.
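
As an optional check, you can confirm that a manifest was pushed for an image; the registry URL and tag below are placeholders to substitute with your own values:

    # Inspect the pushed manifest for one of the PMF images
    docker manifest inspect <your-registry-url>/mfpf-server:<image-tag>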

Set up the OpenShift project for PMF

Follow the steps outlined in this section to deploy the PMF OpenShift Container Platform (OCP) package to a Red Hat OpenShift cluster.

#1. Log in to the Red Hat OpenShift cluster with administrator privileges and create a new namespace using the following command.

    oc new-project mfp

#2. Create a Docker registry secret by replacing the username and password placeholders in the following command.

    oc create secret docker-registry -n mfp mfp-image-pullsecret --docker-server=index.docker.io --docker-username=<username> --docker-password=<your_password>

#3. Create console secrets for the respective components by using the following commands.

   oc create secret generic serverlogin --from-literal=MFPF_ADMIN_USER=admin --from-literal=MFPF_ADMIN_PASSWORD=admin 
  
   oc create secret generic appcenterlogin --from-literal=MFPF_APPCNTR_ADMIN_USER=admin --from-literal=MFPF_APPCNTR_ADMIN_PASSWORD=admin

#4. Create a secret with database credentials.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    data:
      MFPF_ADMIN_DB_USERNAME: <base64-encoded-string>
      MFPF_ADMIN_DB_PASSWORD: <base64-encoded-string>
      MFPF_RUNTIME_DB_USERNAME: <base64-encoded-string>
      MFPF_RUNTIME_DB_PASSWORD: <base64-encoded-string>
      MFPF_PUSH_DB_USERNAME: <base64-encoded-string>
      MFPF_PUSH_DB_PASSWORD: <base64-encoded-string>
      MFPF_LIVEUPDATE_DB_USERNAME: <base64-encoded-string>
      MFPF_LIVEUPDATE_DB_PASSWORD: <base64-encoded-string>
      MFPF_APPCNTR_DB_USERNAME: <base64-encoded-string>
      MFPF_APPCNTR_DB_PASSWORD: <base64-encoded-string>
    kind: Secret
    metadata:
      name: mfpf-server-db-secret
    type: Opaque
    EOF

NOTE: An encoded string can be obtained by using echo -n <value> | base64
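
For example, hypothetical database credentials could be encoded as follows before being pasted into the secret:

    # Encode each value (the credentials shown are placeholders)
    echo -n 'db2inst1' | base64
    echo -n 'mydbpassword' | base64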

#5. (Optional) To connect to a Db2 database that runs on SSL, perform the following steps:

If the schema already exists:

  • Set the enabled property to false under global.dbinit.

If the schema does not exist:

  • Set the enabled property to true under global.dbinit so that the script creates tables for the enabled components.

To establish a connection with the existing schema, set the following values:

a. Run the following keytool command to create a truststore file based on the Db2 SSL certificate:

If you are using a Db2 database on IBM Cloud, download the SSL certificate from the settings page of the Db2 dashboard.

   keytool -importcert -keystore trustStore.jks -storepass pmfcloud -file DigiCertGlobalRootCA.crt -alias db2sslcert

NOTE: Do not change the name of the truststore file, trustStore.jks.

b. Create a secret with the truststore file and the truststore password. You can choose a different password.

   oc create secret generic db2sslsecret --from-file=./trustStore.jks --from-literal=TRUSTSTORE_PASSWORD=pmfcloud

c. In the deploy/crds/charts_v1_mfoperator_cr.yaml file, set the ssl property to true and provide the secret created in the previous step as the sslTrustStoreSecret value in the db section.
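
A sketch of what the db section might look like after this change is shown below; the ssl and sslTrustStoreSecret properties come from this step, while the host, port, and name keys are placeholders for the database details referenced elsewhere in this guide, and the exact nesting in your CR file may differ:

    # db section excerpt from charts_v1_mfoperator_cr.yaml (illustrative)
    db:
      host: "<db-host>"
      port: "<db-port>"
      name: "<db-name>"
      ssl: true
      sslTrustStoreSecret: "db2sslsecret"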

NOTE: To enable or disable specific PMF components, refer to the Custom Resource definitions for details.

NOTE: To enable PMF Analytics, the Elasticsearch operator must be deployed. For steps to deploy the Elasticsearch operator, see here.

Deploy the PMF Operator

  1. Update the image pull secret name in deploy/service_account.yaml (REPLACE_SECRET).
  2. Update the namespace name in deploy/role_binding.yaml (REPLACE_NAMESPACE).
  3. Navigate to the deploy folder inside the PMF OpenShift package and run the following commands to deploy the CRD and the operator and to install the Security Context Constraints (SCC). A quick verification of the operator pod is sketched after the commands.
  export MFOS_PROJECT=<namespace_to_deploy_mobilefoundation>
  oc create -f deploy/crds/charts_v1_mfoperator_crd.yaml 
  oc create -f deploy/service_account.yaml 
  oc create -f deploy/role.yaml 
  oc create -f deploy/role_binding.yaml 
  oc create -f deploy/scc.yaml 
  oc adm policy add-scc-to-group mf-operator system:serviceaccounts:$MFOS_PROJECT 
  oc create -f deploy/operator.yaml 
  oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:$MFOS_PROJECT:mf-operator
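
After the commands above complete, you can verify that the operator pod has started before moving on; the mf-operator pod name prefix matches the sample output shown later in this guide:

    # Confirm the operator pod reaches Running status
    oc get pods -n $MFOS_PROJECT | grep mf-operator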

Deploy PMF components

#1. To deploy any of the PMF components, modify the custom resource configuration deploy/crds/charts_v1_mfoperator_cr.yaml according to your requirements. A complete reference for the custom configuration can be found here.

IMPORTANT NOTE: To access the PMF instances after deployment, you must configure an ingress hostname. Make sure ingress is configured in the custom resource configuration. Refer to this link for details on configuring it.
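
For illustration, the ingress portion of the custom resource might look like the following minimal sketch; the hostname is a placeholder, and your charts_v1_mfoperator_cr.yaml may contain additional keys under spec:

    # Ingress excerpt (illustrative); replace the hostname with your own
    spec:
      ingress:
        hostname: "myhost.mydomain.com"

After updating the custom resource, apply it: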

    oc apply -f deploy/crds/charts_v1_mfoperator_cr.yaml

#2. Run the following command and ensure that the pods are created and running successfully. In a deployment scenario where PMF Server and Push are enabled with 3 replicas each (the default), the output looks similar to the following.

      $ oc get pods
      NAME                           READY     STATUS    RESTARTS   AGE
      mf-operator-5db7bb7w5d-b29j7   1/1       Running   0          1m
      mfpf-server-2327bbewss-3bw31   1/1       Running   0          1m 20s
      mfpf-server-29kw92mdlw-923ks   1/1       Running   0          1m 21s
      mfpf-server-5woxq30spw-3bw31   1/1       Running   0          1m 19s
      mfpf-push-2womwrjzmw-239ks     1/1       Running   0          59s
      mfpf-push-29kw92mdlw-882pa     1/1       Running   0          52s
      mfpf-push-1b2w2s973c-983lw     1/1       Running   0          52s

NOTE: Pods in Running (1/1) status indicate that the service is available for access.

#3. Check whether the routes for accessing the PMF endpoints are created by running the following command.

    $ oc get routes
    NAME                                HOST/PORT             PATH              SERVICES               PORT      TERMINATION   WILDCARD
    ibm-mf-cr-1fdub-mfp-ingress-57khp   myhost.mydomain.com   /imfpush          ibm-mf-cr--mfppush     9080                    None
    ibm-mf-cr-1fdub-mfp-ingress-8skfk   myhost.mydomain.com   /mfpconsole       ibm-mf-cr--mfpserver   9080                    None
    ibm-mf-cr-1fdub-mfp-ingress-dqjr7   myhost.mydomain.com   /doc              ibm-mf-cr--mfpserver   9080                    None
    ibm-mf-cr-1fdub-mfp-ingress-ncqdg   myhost.mydomain.com   /mfpadminconfig   ibm-mf-cr--mfpserver   9080                    None
    ibm-mf-cr-1fdub-mfp-ingress-x8t2p   myhost.mydomain.com   /mfpadmin         ibm-mf-cr--mfpserver   9080                    None
    ibm-mf-cr-1fdub-mfp-ingress-xt66r   myhost.mydomain.com   /mfp              ibm-mf-cr--mfpserver   9080                    None

Deploying PMF Analytics

The Elasticsearch operator is a prerequisite for deploying Persistent Mobile Foundation Analytics on an OpenShift cluster.

Prerequisites

#1. (Mandatory) A pre-created PersistentVolume (PV) and PersistentVolumeClaim (PVC), or a StorageClass, must be available.

#2. Create the config.properties file at a preferred location and add the following properties to it:

   allowed.hostname= 
   maximum.request=20 
   time.window=

allowed.hostname: This property specifies a single allowed (whitelisted) hostname, or a list of hostnames, that are permitted to send requests to the application. With this configuration, Analytics works only for requests from the specified hostnames; all other requests are rejected. If no value is provided, the default whitelisted server name is localhost.

maximum.request and time.window: These properties define the maximum number of invalid login attempts allowed within the time interval (in minutes) set by time.window. For example, if maximum.request is set to 20 and time.window is set to 1, then 19 invalid login attempts are allowed within 1 minute. After 19 attempts, the user is not allowed to log in, even with correct credentials, until the configured 1-minute window has elapsed.
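
For example, a filled-in config.properties might look like the following; the hostname value is purely illustrative:

    allowed.hostname=analytics.example.com
    maximum.request=20
    time.window=1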

#3. Create a configmap and provide the full file path of config.properties. For example:

   oc create configmap analytics-custom-config --from-file=/opt/config.properties

You can place config.properties at any path; there is no requirement to keep it under /opt. Make sure you reference this configmap in the Analytics deployment: edit the deployment yaml and specify the configmap name, otherwise Analytics cannot pick up the user configuration and continues to run with default settings.

#4. If the mfpanalytics component is not enabled already, enable it in the MFOperator CR yaml file and reapply the CR yaml.

kubectl apply -f crds/charts_v1_mfoperator_cr.yaml

#5. Whenever you change the configmap, you must restart the Analytics pod for the updates to take effect; you can simply delete the pod so that it is recreated.
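
For example, you can locate the Analytics pod and delete it so that it is recreated with the updated configuration (the pod name below is a placeholder):

    # Find the Analytics pod, then delete it; its controller recreates it with the updated configmap
    oc get pods | grep -i analytics
    oc delete pod <analytics-pod-name>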

Deploy Elasticsearch operator

#1. Log in to the OpenShift cluster and create a new project.

   oc login -u <username> -p <password> <cluster-url>
   oc new-project mfp

#2. Create a Docker registry secret by replacing the username and password placeholders in the following command.

   oc create secret docker-registry -n mfp mfp-image-pullsecret --docker-server=index.docker.io --docker-username=<username> --docker-password=<your_password>

#3. Add the ImagePullSecret (mfp-image-pullsecret) by replacing the REPLACE_SECRET placeholder in the file es/deploy/service_account.yaml.

#4. Update the namespace name by replacing the REPLACE_NAMESPACE placeholder in the file es/deploy/role_binding.yaml.

#5. For Elasticsearch deployment, either claimName (PVC) or storageClassName must be specified in es/deploy/crds/charts_v1_esoperator_cr.yaml.

   persistence:
     storageClassName: ""
     claimName: ""

#6. To use claimName, a PV and PVC must be configured. Use the following command to configure a PersistentVolume (PV).

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      labels:
        name: mfanalyticspv  
      name: mfanalyticspv
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        path: <nfs-mount-volume-path>
        server: <nfs-server-hostname-or-ip>
    EOF

To configure a PersistentVolumeClaim (PVC), use the following command.

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mfanalyticsvolclaim
      namespace: <projectname-or-namespace>
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 20Gi
      selector:
        matchLabels:
          name: mfanalyticspv
      volumeName: mfanalyticspv
    EOF

Note: Ensure that you add the nfs-server-hostname-or-ip and nfs-mount-volume-path entries in the yaml, and ensure that the PVC is in the Bound state.
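
You can confirm that the PVC is bound with a quick check, for example:

    # The STATUS column should show Bound before you proceed
    oc get pvc mfanalyticsvolclaim -n <projectname-or-namespace>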

#7. Execute the following commands to deploy the Elasticsearch operator.

    oc create -f es/deploy/crds/charts_v1_esoperator_crd.yaml
    oc create -f es/deploy/service_account.yaml
    oc create -f es/deploy/role.yaml
    oc create -f es/deploy/role_binding.yaml
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:mfp  
    oc create -f es/deploy/operator.yaml  
    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:mfp:es-operator

Ensure that you update the image pull secret and the storage class or PVC name in the custom resource yaml, then apply it:

   oc apply -f es/deploy/crds/charts_v1_esoperator_cr.yaml

#8. After the deployment is completed, Elasticsearch runs as an internal service and can be used by Persistent Mobile Foundation Analytics.

While deploying Persistent Mobile Foundation Analytics, update esnamespace in the ESOperator CR yaml with the project name where Elasticsearch is deployed. This change is required only if Elasticsearch is deployed in a different namespace than Persistent Mobile Foundation Analytics.

Accessing the console of PMF components

Following are the endpoints for accessing the consoles of the PMF components:

  • PMF Server Administration Console - http://<ingress_hostname>/mfpconsole
  • Operational Analytics Console - http://<ingress_hostname>/analytics/console
  • Application Center Console - http://<ingress_hostname>/appcenterconsole

For the Operator image, a default random console password is generated. You can obtain it by running the commands below, depending on the components you chose during the deployment. You can override this default behaviour. For more details, refer here.

oc get secret mf-mfpserver-consolesecret -o jsonpath='{.data.MFPF_ADMIN_PASSWORD}' | base64 --decode
oc get secret mf-mfpanalytics-consolesecret -o jsonpath='{.data.MFPF_ANALYTICS_ADMIN_PASSWORD}' | base64 --decode
oc get secret mf-mfpappcenter-consolesecret -o jsonpath='{.data.MFPF_APPCNTR_ADMIN_PASSWORD}' | base64 --decode

Uninstall

Use the following commands to perform uninstallation:

oc delete -f deploy/crds/charts_v1_mfoperator_cr.yaml

If the above command gets stuck, then run the patch command:

oc patch crd/mfoperators.mf.ibm.com -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete -f deploy/
oc delete -f deploy/crds/charts_v1_mfoperator_crd.yaml

Uninstall Elasticsearch

oc delete -f es/deploy/crds/charts_v1_esoperator_cr.yaml

If the above command gets stuck, then run the patch command:

oc patch crd/esoperators.es.ibm.com -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete -f es/deploy/
oc delete -f es/deploy/crds/charts_v1_esoperator_crd.yaml
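
As an optional check after uninstalling, you can verify that the PMF and Elasticsearch CRDs have been removed; the command should return no results once both are deleted:

oc get crd | grep -E 'mfoperators.mf.ibm.com|esoperators.es.ibm.com'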

Troubleshooting

  1. For installation-related issues: You might encounter issues when you install PMF on a Red Hat OpenShift cluster. The following are some of the common issues, their possible causes, and resolutions:

    1. The mf-operator pod does not show up

      Possible cause: The scc.yaml is not deployed or the mf-operator SCC assignment is not done.

      Error:

      $ oc get rs
      NAME                    DESIRED   CURRENT   READY   AGE
      mf-operator-87b88494f   1         0         0       33s
      $ oc describe rs mf-operator-87b88494f
      
      Events:
      Type     Reason        Age                   From                   Message
      ----     ------        ----                  ----                   -------
      Warning  FailedCreate  96s (x14 over 2m17s)  replicaset-controller  Error creating: pods "mf-operator-87b88494f-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 1001: must be in the ranges: [1000570000, 1000579999]] 
      

      Resolution: Assign mf-operator scc by using the following command:

      oc adm policy add-scc-to-group mf-operator system:serviceaccounts:<project-name>
      
    2. Operator pod STATUS shows ErrImagePull

      Possible cause: The image pull secret is not updated in service_account.yaml, the referenced pull secret does not exist, or the secret exists but has wrong registry credentials.

      Error:

      $ oc get pods
      NAME                          READY   STATUS         RESTARTS   AGE
      mf-operator-87b88494f-gtpq2   0/1     ErrImagePull   0          4s
      

      Resolution: Ensure that the image pull secret is created with the right registry credentials and that the same secret name is updated in service_account.yaml.

    3. After deploying custom resource, no pods show up.

      Possible cause: The namespace name is not updated in role_binding.yaml, or the wrong namespace name is used. Error in the operator pod logs:

      Failed to list mf.ibm.com/v1, Kind=MFOperator: mfoperators.mf.ibm.com is forbidden: User "system:serviceaccount:mfnew:mf-operator" cannot list resource "mfoperators" in API group "mf.ibm.com" in the namespace "mfnew"
      

      Resolution: Ensure proper namespace name is updated in role_binding.yaml.

    4. The dbinit-job shows an error

      Possible cause: The DB details (host, port, and name) provided in the custom resource (charts_v1_mfoperator_cr.yaml) are incorrect, the DB is not reachable, or the DB secret is created with wrong credentials.

      Error:

      $ oc get pods
      NAME                          READY   STATUS    RESTARTS   AGE
      ibm-mf-dbinit-job-9c2gb       0/3     Error     3          32s
      

      If the db-init pod log reports an error specifically for the table PUSH_DEVICES, the PMF database might have been created without the required PAGESIZE. Refer here to create a database with the correct PAGESIZE.

      Resolution: Correct the DB details and redeploy.

    5. PMF pods show 0/1 READY

       $ oc get pods
       NAME                              READY   STATUS      RESTARTS   AGE
       ibm-mf-defaultsecrets-job-7z42m   0/1     Completed   0          5m21s
       ibm-mf-push-77fb65c758-wtg28      0/1     Running     0          5m16s
       ibm-mf-server-d87ddf67f-x9n4x     0/1     Running     0          5m16s
      

      Possible cause: The database that PMF connects to is not reachable. For more details on the issue, check the pod logs.

      Resolution: If no errors are shown in the pod logs and the DB is reachable, delete the existing pods so that new pods are recreated. For any other issues, check the operator pod logs for more details.

    6. Elasticsearch data pod is not coming up properly

      Possible cause: The storage class is not provided in the custom resource (charts_v1_esoperator_cr.yaml), or the provided PVC does not give Elasticsearch the access it needs to write data.

      Resolution: If the claimName is provided in the custom resource (charts_v1_esoperator_cr.yaml), make sure the Elasticsearch data pod has access to the mount location. Run the following commands on the mount path:

       chown -R 1001:1001 <mount_path>
       chmod -R ug+rwx <mount_path>
      
    7. PMF routes are not created/accessible

      Possible cause: spec.ingress.hostname is not updated in the custom resource yaml, or the pods are not running properly.

      Resolution: Update spec.ingress.hostname and redeploy the custom resource.

  2. For PMF deployment-related issues, share the following information with PSL Support:
    • Version of the mf-operator installed. You can get it from the deploy/operator.yaml file.
    • PMF custom resource (charts_v1_mfoperator_cr.yaml)
    • Output of oc get pods
    • Operator pod logs (oc logs <mf-operator-pod-name>)
    • Output of the command oc describe pod <pod-name> for each pod
    • PMF pod logs (oc logs <pod-name>) of all pods
  3. For PMF functionality-related issues: Enable PMF traces using the custom configuration described here, then get the logs by running the following commands:

    oc cp <server-pod-name>:/logs/messages.log ./server-messages.log
    oc cp <server-pod-name>:/logs/trace.log ./server-trace.log
    
  4. For es-operator-related issues, share the following information with PSL Support:
    • Version of the es-operator installed. You can get it from the es/deploy/operator.yaml file.
    • Elasticsearch custom resource (charts_v1_esoperator_cr.yaml)
    • Output of oc get pods
    • Operator pod logs (oc logs <es-operator-pod-name>)
    • Output of the command oc describe pod <pod-name> for each pod
    • Elasticsearch pod logs (oc logs <pod-name>) of all pods

Additional References

  1. Setting up of PMF databases
  2. Custom Resource configuration parameters for PMF
  3. Scenarios in enabling Ingress