Deploying PMF on an existing container platform

Prerequisites

Ensure that the following prerequisites are in place before you start installing a PMF instance by using the PMF operator.

  • Installed and configured a Red Hat® OpenShift® Container Platform cluster, version 4.15.
  • Installed and configured OpenShift CLI (oc).
  • Installed and configured Docker.
  • Created a supported database and have the database access details handy for further use. See Setting Up Databases.
  • Ensured that PMF Analytics has a storage class or a mounted storage volume for persisting Analytics data (NFS recommended).
  • Downloaded the PMF package for Red Hat® OpenShift® from the authorized link and unpacked the archive to a directory.

Installing the PMF instance

Install the PMF instance by following these steps:

  • Deploy the PMF OpenShift Container Platform (OCP) package

  • Deploy the PMF Operator

  • Deploy the PMF components (if required)

  • Deploy the PMF Analytics component

  • Access the PMF component consoles

Deploying PMF OpenShift Container Platform (OCP) package

To deploy the PMF OpenShift Container Platform (OCP) package to the Red Hat OpenShift cluster, proceed as follows.

  1. Log in to the Red Hat OpenShift cluster with administrator privileges and create a new namespace by using the following command.

    oc new-project mfp
    
  2. Create a Docker registry secret by using the following command, replacing the username and password placeholders.

    oc create secret docker-registry -n mfp mfp-image-pullsecret --docker-server=index.docker.io --docker-username=<username> --docker-password=<your_password>
    
  3. Create console secrets for the respective components by using the following commands.

    oc create secret generic serverlogin --from-literal=MFPF_ADMIN_USER=admin --from-literal=MFPF_ADMIN_PASSWORD=admin 
    
    oc create secret generic appcenterlogin --from-literal=MFPF_APPCNTR_ADMIN_USER=admin --from-literal=MFPF_APPCNTR_ADMIN_PASSWORD=admin
    
  4. Create a secret with database credentials by using the following command.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    data:
       MFPF_ADMIN_DB_USERNAME: <base64-encoded-string>
       MFPF_ADMIN_DB_PASSWORD: <base64-encoded-string>
       MFPF_RUNTIME_DB_USERNAME: <base64-encoded-string>
       MFPF_RUNTIME_DB_PASSWORD: <base64-encoded-string>
       MFPF_PUSH_DB_USERNAME: <base64-encoded-string>
       MFPF_PUSH_DB_PASSWORD: <base64-encoded-string>
       MFPF_LIVEUPDATE_DB_USERNAME: <base64-encoded-string>
       MFPF_LIVEUPDATE_DB_PASSWORD: <base64-encoded-string>
       MFPF_APPCNTR_DB_USERNAME: <base64-encoded-string>
       MFPF_APPCNTR_DB_PASSWORD: <base64-encoded-string>
    kind: Secret
    metadata:
       name: mfpf-server-db-secret
    type: Opaque
    EOF
    

    Note: An encoded string can be obtained by using the following command.

    echo -n <string-to-encode> | base64
    
  5. Optional: To connect to a Db2 database that runs on Secure Sockets Layer (SSL), proceed as follows.

    • If the schema already exists, set the enabled property to false under global.dbinit.

    • If the schema does not exist, set the enabled property to true under global.dbinit so that the script creates tables for the enabled components (see the illustrative CR fragment at the end of this step).

    To establish a connection with the existing schema, set the following values.

    • Run the following keytool command to create a truststore file based on the Db2 SSL certificate.

      keytool -importcert -keystore trustStore.jks -storepass pmfcloud -file DigiCertGlobalRootCA.crt -alias db2sslcert
      

      For the IBM Cloud Db2 database, download the SSL certificate from the Settings page of the Db2 dashboard.

      NOTE: Do not change the name of the truststore file; it must remain trustStore.jks.

    • Create a secret with the truststore file and truststore password by using the following command.

      oc create secret generic db2sslsecret --from-file=./trustStore.jks --from-literal=TRUSTSTORE_PASSWORD=pmfcloud
      

      You can choose a different password.

    • In the deploy/crds/charts_v1_mfoperator_cr.yaml file, set the value of the ssl property to "true" and provide the secret created in the previous step as the sslTrustStoreSecret value of the db section, as shown in the illustrative fragment below.
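
    The following is a minimal, illustrative fragment of the deploy/crds/charts_v1_mfoperator_cr.yaml file for this step. The apiVersion, kind, and metadata values are assumptions inferred from the CRD and file names used on this page; verify them against the file shipped in your package.

      apiVersion: mf.ibm.com/v1                 # assumed from the mfoperators.mf.ibm.com CRD name
      kind: MFOperator                          # assumed; see deploy/crds/charts_v1_mfoperator_crd.yaml
      metadata:
         name: mf-cr                            # example name
      spec:
         global:
            dbinit:
               enabled: true                    # set to false if the schema already exists
         db:
            ssl: "true"                         # enable SSL for the Db2 connection
            sslTrustStoreSecret: db2sslsecret   # secret created in the previous step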


Deploying the PMF Operator

To deploy the PMF Operator, proceed as follows.

  1. Update the image pull secret name by replacing the REPLACE_SECRET placeholder in the deploy/service_account.yaml file.
  2. Update the namespace name by replacing the REPLACE_NAMESPACE placeholder in the deploy/role_binding.yaml file. (A sed sketch for both replacements follows the commands in step 3.)
  3. Navigate to the deploy folder inside the PMF Red Hat OpenShift package and run the following commands to deploy the CRD and the operator, and to install the Security Context Constraints (SCC).

    export MFOS_PROJECT=<namespace_to_deploy_mobilefoundation>
    oc create -f deploy/crds/charts_v1_mfoperator_crd.yaml 
    oc create -f deploy/service_account.yaml 
    oc create -f deploy/role.yaml 
    oc create -f deploy/role_binding.yaml 
    oc create -f deploy/scc.yaml 
    oc adm policy add-scc-to-group mf-operator system:serviceaccounts:$MFOS_PROJECT 
    oc create -f deploy/operator.yaml 
    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:$MFOS_PROJECT:mf-operator
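
    For steps 1 and 2, you can make the placeholder replacements with any text editor. As a minimal sketch, assuming GNU sed, the mfp-image-pullsecret secret created earlier, and the MFOS_PROJECT variable exported above:

     # Replace the placeholders in the operator manifests (GNU sed; BSD sed needs -i '').
     sed -i "s/REPLACE_SECRET/mfp-image-pullsecret/" deploy/service_account.yaml
     sed -i "s/REPLACE_NAMESPACE/${MFOS_PROJECT}/" deploy/role_binding.yaml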
    

Deploying PMF components

To deploy any of the PMF components, proceed as follows.

  1. Modify the custom resource configuration file deploy/crds/charts_v1_mfoperator_cr.yaml as per your requirements, and then apply it by using the following command. For more information, see Custom Resource definitions.

    oc apply -f deploy/crds/charts_v1_mfoperator_cr.yaml
    
  2. Run the following command and ensure the pods are created and running successfully.

    oc get pods
    

    In a deployment scenario where PMF Server and Push are enabled with 3 replicas each (the default), the following output is expected.

    NAME                           READY     STATUS    RESTARTS   AGE
    mf-operator-5db7bb7w5d-b29j7   1/1       Running   0          1m
    mfpf-server-2327bbewss-3bw31   1/1       Running   0          1m 20s
    mfpf-server-29kw92mdlw-923ks   1/1       Running   0          1m 21s
    mfpf-server-5woxq30spw-3bw31   1/1       Running   0          1m 19s
    mfpf-push-2womwrjzmw-239ks     1/1       Running   0          59s
    mfpf-push-29kw92mdlw-882pa     1/1       Running   0          52s
    mfpf-push-1b2w2s973c-983lw     1/1       Running   0          52s
    

    Note: A pod in Running (1/1) status indicates that the service is available for access.

  3. Check if the routes are created for accessing the PMF endpoints by running the following command.

    oc get routes
    NAME                                HOST/PORT             PATH              SERVICES               PORT   TERMINATION   WILDCARD
    ibm-mf-cr-1fdub-mfp-ingress-57khp   myhost.mydomain.com   /imfpush          ibm-mf-cr--mfppush     9080                 None
    ibm-mf-cr-1fdub-mfp-ingress-8skfk   myhost.mydomain.com   /mfpconsole       ibm-mf-cr--mfpserver   9080                 None
    ibm-mf-cr-1fdub-mfp-ingress-dqjr7   myhost.mydomain.com   /doc              ibm-mf-cr--mfpserver   9080                 None
    ibm-mf-cr-1fdub-mfp-ingress-ncqdg   myhost.mydomain.com   /mfpadminconfig   ibm-mf-cr--mfpserver   9080                 None
    ibm-mf-cr-1fdub-mfp-ingress-x8t2p   myhost.mydomain.com   /mfpadmin         ibm-mf-cr--mfpserver   9080                 None
    ibm-mf-cr-1fdub-mfp-ingress-xt66r   myhost.mydomain.com   /mfp              ibm-mf-cr--mfpserver   9080                 None
    
    

Important: To access the PMF instances after deployment, you need to configure the ingress hostname. Ensure that ingress is configured in the Custom Resource, as sketched below. For more information, see Configuring Ingress parameters.
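
A minimal, illustrative custom resource fragment for this scenario follows. The component and ingress field names are assumptions based on the component names used on this page; consult Custom Resource definitions for the authoritative schema.

    spec:
       ingress:
          hostname: myhost.mydomain.com   # hypothetical hostname; use your own
       mfpserver:
          enabled: true
          replicas: 3                     # default replica count
       mfppush:
          enabled: true
          replicas: 3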

Deploying PMF Analytics

To deploy the PMF Analytics component, proceed as follows.

Prerequisites

  1. Ensure that you have created a PersistentVolume (PV) and a PersistentVolumeClaim (PVC), or a StorageClass.
  2. Ensure that you deploy the Elasticsearch operator. For more information, see Deploying the Elasticsearch operator.
  3. Release 9.1: Create the config.properties file at a preferred location and add the following variables to it:

    allowed.hostname= 
    maximum.request=20 
    time.window=
  4. Create a ConfigMap and provide the full file path of the config.properties file. For example:

    oc create configmap analytics-custom-config --from-file=/<path>/config.properties
    

    Ensure that you update the ConfigMap in the deployment file for Analytics. Edit the deployment YAML file and specify this ConfigMap name; otherwise, Analytics cannot identify the user configuration and continues to run with the default settings.

    Release 9.2: The config.properties file is automatically created for the PMF Server console, PMF Appcenter console, and PMF Analytics console.

  5. Enable the mfpanalytics component in the MFOperator CR YAML file, if not already done, and reapply the CR YAML.

    kubectl apply -f deploy/crds/charts_v1_mfoperator_cr.yaml
    

    Note: Any change to the ConfigMap needs a restart of the Analytics pod to take effect; to restart it, you can simply delete the pod, as shown below.
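
    For example, assuming the Analytics pod name contains "analytics" (the exact name is generated at deployment time), a minimal restart sketch:

     # Find the Analytics pod, then delete it; its Deployment re-creates it
     # with the updated ConfigMap settings.
     oc get pods | grep analytics
     oc delete pod <analytics-pod-name>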

Deploying the Elasticsearch operator

To deploy the Elasticsearch operator, proceed as follows.

  1. Log in to the Red Hat OpenShift cluster and create a new project.

    oc login -u <username> -p <password> <cluster-url>
    oc new-project mfp
    
  2. Create a Docker registry secret by using the following command, replacing the username and password.

    oc create secret docker-registry -n mfp mfp-image-pullsecret --docker-server=index.docker.io --docker-username=<username> --docker-password=<your_password>
    
  3. Add the image pull secret (mfp-image-pullsecret) by replacing the REPLACE_SECRET placeholder in the es/deploy/service_account.yaml file.
  4. Update the namespace name by replacing the REPLACE_NAMESPACE placeholder in the es/deploy/role_binding.yaml file.

  5. Specify either claimName (PVC) or storageClassName in the es/deploy/crds/charts_v1_esoperator_cr.yaml file.

    persistence:
       storageClassName: ""
       claimName: ""
    
  6. To use claimName, configure a PersistentVolume and a PersistentVolumeClaim by using the following commands.

    PersistentVolume

       cat <<EOF | kubectl apply -f -
       apiVersion: v1
       kind: PersistentVolume
       metadata:
          labels:
             name: mfanalyticspv
          name: mfanalyticspv
       spec:
          capacity:
             storage: 20Gi
          accessModes:
          - ReadWriteMany
          persistentVolumeReclaimPolicy: Retain
          nfs:
             path: <nfs-mount-volume-path>
             server: <nfs-server-hostname-or-ip>
       EOF
    

    PersistentVolumeClaim

       cat <<EOF | kubectl apply -f -
       apiVersion: v1
       kind: PersistentVolumeClaim
       metadata:
          name: mfanalyticsvolclaim
          namespace: <projectname-or-namespace>
       spec:
          accessModes:
          - ReadWriteMany
          resources:
             requests:
                storage: 20Gi
          selector:
             matchLabels:
                name: mfanalyticspv
          volumeName: mfanalyticspv
       EOF
    

    Note: Ensure that you replace the nfs-server-hostname-or-ip and nfs-mount-volume-path entries in the YAML file, and ensure that the PVC is in the Bound state (a verification sketch follows at the end of this section).

  7. Execute the following commands to deploy the Elasticsearch operator.

    oc create -f es/deploy/crds/charts_v1_esoperator_crd.yaml
    oc create -f es/deploy/service_account.yaml
    oc create -f es/deploy/role.yaml
    oc create -f es/deploy/role_binding.yaml
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:mfp  
    oc create -f es/deploy/operator.yaml
    
    oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:mfp:es-operator
    

    Ensure that you update the image pull secret and the storage class or PVC name in the custom resource YAML, and then apply it:

     oc apply -f es/deploy/crds/charts_v1_esoperator_cr.yaml
    

After the deployment is completed, Elasticsearch runs as an internal service and can be used by Persistent Mobile Foundation Analytics.

Note: If Elasticsearch is deployed in a different namespace than Persistent Mobile Foundation Analytics, then while deploying Persistent Mobile Foundation Analytics, update esnamespace in the charts_v1_esoperator_cr.yaml file with the project name where Elasticsearch is deployed.
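
To verify the deployment, you can check that the PVC is in the Bound state and that the operator and Elasticsearch pods are running; a minimal sketch, assuming the mfp namespace used above:

    oc get pvc -n mfp    # the STATUS column should show Bound
    oc get pods -n mfp   # operator and Elasticsearch pods should be Running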

Accessing PMF component consoles

Following are the endpoints for accessing the consoles of the PMF components.

PMF console                         URL
PMF Server Administration console   http://{ingress_hostname}/mfpconsole
Operational Analytics console       http://{ingress_hostname}/analytics/console
Application Center console          http://{ingress_hostname}/appcenterconsole

For the Operator image, a random default console password is generated. It can be obtained by using the following commands, depending on the components you chose during the deployment.

oc get secret mf-mfpserver-consolesecret -o jsonpath='{.data.MFPF_ADMIN_PASSWORD}' | base64 -D
oc get secret mf-mfpanalytics-consolesecret -o jsonpath='{.data.MFPF_ANALYTICS_ADMIN_PASSWORD}' | base64 -D
oc get secret mf-mfpappcenter-consolesecret -o jsonpath='{.data.MFPF_APPCNTR_ADMIN_PASSWORD}' | base64 -D

You can override the default behaviour. For more details, refer to Creating custom-defined console login secrets.
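
For example, to retrieve the generated PMF Server console password into a shell variable before logging in (the hostname is hypothetical; use your configured ingress hostname):

MFP_CONSOLE_PASSWORD=$(oc get secret mf-mfpserver-consolesecret -o jsonpath='{.data.MFPF_ADMIN_PASSWORD}' | base64 -D)
# Open http://myhost.mydomain.com/mfpconsole and log in as admin with this password.
echo "$MFP_CONSOLE_PASSWORD"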

Uninstalling PMF on Cloud

Uninstall PMF by using the following commands.

oc delete -f deploy/crds/charts_v1_mfoperator_cr.yaml

If the above command gets stuck, run the following patch command and then delete the remaining resources.

oc patch crd/mfoperators.mf.ibm.com -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete -f deploy/
oc delete -f deploy/crds/charts_v1_mfoperator_crd.yaml

Uninstall Elasticsearch by using the following command.

oc delete -f es/deploy/crds/charts_v1_esoperator_cr.yaml

If the above command gets stuck, run the following patch command and then delete the remaining resources.

oc patch crd/esoperators.es.ibm.com -p '{"metadata":{"finalizers":[]}}' --type=merge
oc delete -f es/deploy/
oc delete -f es/deploy/crds/charts_v1_esoperator_crd.yaml

Pushing PMF images to a private docker container registry (Optional)

To push PMF images to a private docker container registry, proceed as follows.

  1. Unpack the PMF package into a work directory (for example, mfospkg).

    mkdir mfospkg
    tar xzvf PMF-OpenShift-Pak-<version>.tar.gz -C mfospkg/
    
  2. Load and push the images to the container registry from your local machine by using the following commands, adjusting the values for your environment.

      #!/bin/bash

      export CONTAINER_REGISTRY_URL="index.docker.io/persistentmobilefoundation"
      export OPERATOR_IMAGE_TAG="9.1.0"
      export IMAGE_TAG="9.1.0"

      cd images

      # Load every image archive into the local Docker engine.
      ls * | xargs -I{} docker load --input {}

      # Tag each loaded image for the target registry and push it.
      for file in * ; do
         docker tag ${file/.tar.gz/} ${CONTAINER_REGISTRY_URL}/${file/.tar.gz/}
         docker push ${CONTAINER_REGISTRY_URL}/${file/.tar.gz/}
      done

      # Create, annotate, and push a manifest for each PMF image.
      for MF_IMAGE in "mfpf-server" "mfpf-analytics" "mfpf-push" "mfpf-analytics-recvr" "mfpf-liveupdate" "mfpf-appcenter" "mfpf-elasticsearch" "mf-operator" "es-operator"
      do
         docker manifest create ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${OPERATOR_IMAGE_TAG} ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${OPERATOR_IMAGE_TAG}-amd64 --amend --insecure
         docker manifest annotate ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG} ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG}-amd64 --os linux --arch amd64
         docker manifest push ${CONTAINER_REGISTRY_URL}/${MF_IMAGE}:${IMAGE_TAG} --insecure
      done
    

Where:

  • CONTAINER_REGISTRY_URL: your private container registry
  • OPERATOR_IMAGE_TAG and IMAGE_TAG: the downloaded PMF image version
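
To confirm that a push succeeded, you can inspect one of the pushed manifests; the image name below is an example:

  docker manifest inspect ${CONTAINER_REGISTRY_URL}/mfpf-server:${IMAGE_TAG}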
