Template project for a Mojaloop connector for a core banking system.
Technologies
- Apache Camel
- Apache CXF
- Apache Maven
- DataSonnet
- Spring Boot
- Swagger
- OpenAPI Generator Maven Plugin
- Prometheus JVM Client
When copying this template project, rename the project in the following places:
TODO: add places where project name must be renamed
Each core connector repo should have its own README with instructions on how to run the application, following this template:
Most of the details and instructions about development and deployment are in the main template project README. Only additional or differing topics are specified below.
To run the application, run the Maven build and then start the JAR, specifying the required system properties.
mvn clean install
java \
  -Dml-conn.outbound.host="http://localhost:4001" \
  -Ddfsp.host="https://dfsp/api" \
  -Ddfsp.username="user" \
  -Ddfsp.password="pass" \
  -Ddfsp.tenant-id="id" \
  -Ddfsp.channel-id="channelId" \
  -Ddfsp.api-key="apiKey" \
  -jar ./core-connector/target/core-connector.jar
To run the application as a Docker image, run the docker build from the project root folder and then docker run, specifying the required environment variables.
docker build -t core-connector:latest .
docker run --rm \
  -e MLCONN_OUTBOUND_ENDPOINT="http://localhost:4001" \
  -e DFSP_HOST="https://dfsp/api" \
  -e DFSP_USERNAME="user" \
  -e DFSP_PASSWORD="P\@ss0rd" \
  -e DFSP_AUTH_TENANT_ID="id" \
  -e DFSP_AUTH_CHANNEL_ID="channelId" \
  -e DFSP_AUTH_API_KEY="apiKey" \
  -p 3003:3003 core-connector:latest
NOTE: keep the values in double quotes (") and escape any special character (\@).
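To see exactly what the shell passes for a quoted value, the snippet below compares the two quoting styles, using the placeholder password from the command above. Note that inside double quotes a backslash before a non-special character such as @ is preserved literally, while single quotes pass the value through unchanged:

```shell
# Sketch: compare quoting styles for a password with special characters.
# In POSIX shells, backslash inside double quotes is only special before
# $, `, ", \ and newline, so "P\@ss0rd" arrives with the backslash intact.
dq="P\@ss0rd"
sq='P@ss0rd'
printf '%s\n' "$dq"
printf '%s\n' "$sq"
```

If the container logs show a literal backslash in the credential, this quoting difference is the first thing to check.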
PS: it is best to keep the main instructions only here in the template README, to avoid them becoming outdated and to avoid spreading fragments of instructions across all the core connector repos.
For general usage:
- As this is a Java Maven project, run the Maven command to generate the Java JAR file. It will compile the REST DSL route and model files.
mvn clean install
- Run the Java application. Specify the required system properties values.
java \
-Dml-conn.outbound.host="http://localhost:4001" \
-Ddfsp.host="https://dfsp/api" \
-Ddfsp.username="user" \
-Ddfsp.password="pass" \
-Ddfsp.tenant-id="id" \
-Ddfsp.channel-id="channelId" \
-Ddfsp.api-key="apiKey" \
-jar ./core-connector/target/core-connector-{{version}}.jar
Or, to run the Java application from the IntelliJ IDE, add a Run/Debug configuration.
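Before launching, it can help to confirm that the Maven build actually produced the versioned JAR, since the file name contains a version placeholder. A minimal sketch (the target path comes from this README; the helper name is hypothetical):

```shell
# Hypothetical helper: locate the versioned jar produced by `mvn clean install`.
find_connector_jar() {
  ls "$1"/core-connector-*.jar 2>/dev/null | head -n 1
}

jar=$(find_connector_jar ./core-connector/target)
if [ -z "$jar" ]; then
  echo "jar not found; run 'mvn clean install' first"
fi
```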
TODO:
- add instructions on how to run the application from IntelliJ
- add instructions on how to debug the application from IntelliJ
- add instructions covering the patterns followed for Core Connector development (Apache Camel, DataSonnet, logging, etc.)
TODO: add details about application.yaml
NOTE: Each Core Connector project has its own set of credentials to be declared at runtime, so please check whether the repo has additional information in this regard.
:warning: IMPORTANT: do not commit an application.yaml containing sensitive information.
$ java -Dml-conn.outbound.host=http://simulator:4001 -jar ./core-connector/target/core-connector-{{version}}.jar
TODO: add instructions how to update api.yaml
To test the backend connection, run mojaloop-simulator-backend before running the connector.
$ docker run --rm -p 3000:3000 mojaloop-simulator-backend:latest
TODO: describe the Dockerfile variables usage
To build a new Docker image based on the Dockerfile:
$ docker build -t mojaloop-core-connector:latest .
To run the Docker image:
docker run -p 3000:3000 -p 8080:8080 -t mojaloop-core-connector:latest
To run the Docker image overriding the application properties, set the environment variables as listed in the Dockerfile. The pattern for Docker environment variable names is UPPER_CASE with underscores (e.g. BACKEND_ENDPOINT).
$ docker run --rm -e BACKEND_ENDPOINT=http://simulator:3000 -e MLCONN_OUTBOUND_ENDPOINT=http://simulator:3003 -p 3002:3002 mojaloop-simulator-core-connector:latest
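Since the variable names must match the Dockerfile exactly, a fail-fast check before docker run can save a debugging round-trip. A sketch, using the two variable names from the docker run example above (check_env is a hypothetical helper; the authoritative list of variables is in the Dockerfile):

```shell
# Hypothetical helper: print the names of required environment variables
# that are not set, so docker run is not started with missing configuration.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\"\${$v:-}\""
    if [ -z "$val" ]; then
      missing="$missing $v"
    fi
  done
  echo "$missing"
}

# Variable names taken from the docker run example above.
check_env BACKEND_ENDPOINT MLCONN_OUTBOUND_ENDPOINT
```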
To run the Integration Tests (run mvn clean install under the core-connector folder first):
mvn -P docker-it clean install
To test the Prometheus client implementation in a local environment, follow the steps below.
- Create a folder for the Prometheus Docker volume
$ mkdir -p /docker/prometheus
- Under the created folder, create a prometheus.yml configuration file with the content below.
global:
  scrape_interval: 10s # By default, scrape targets every 15 seconds.
  evaluation_interval: 10s # By default, evaluate rules every 15 seconds.

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:7001'] # Might be the local IP
- Start the Docker container, binding the configuration file.
docker run -d -p 9090:9090 --name prometheus -v /path/to/file/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
NOTE: After running the above command once, you can use docker stop prometheus to disable the service and docker start prometheus to enable it again. Alternatively, add the --rm flag to destroy the Docker container whenever it is stopped.
- Once Docker is running, access Prometheus in the browser at localhost:9090. Check the configuration is as expected under /config and /targets.
- On the Graph page, in the Expression field, add the metric key you want to monitor, according to what is set for the application client, and press Execute.
NOTE: The list of metric keys can be found at the metrics_path set under scrape_configs in the prometheus.yml file (e.g. localhost:7001/metrics).
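The folder and file steps above can be combined into one snippet. This sketch writes the configuration under /tmp to avoid needing root for /docker; adjust the path to your actual volume folder:

```shell
# Sketch: create the volume folder and write the prometheus.yml shown above.
dir=/tmp/docker/prometheus   # example path; the README uses /docker/prometheus
mkdir -p "$dir"
cat > "$dir/prometheus.yml" <<'EOF'
global:
  scrape_interval: 10s
  evaluation_interval: 10s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:7001']
EOF
echo "wrote $dir/prometheus.yml"
```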
Prometheus metrics are enabled by following the steps and samples from prometheus/client_java.
The types of metrics enabled are Counter and Histogram. The default port for the exporter is 7001, but it can be changed via the application.yml property server.metrics.port.
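For example, overriding the exporter port might look like this in application.yml (a hypothetical fragment; the property name is taken from this README, and 7002 is an arbitrary example value):

```yaml
server:
  metrics:
    port: 7002   # hypothetical override; the default is 7001
```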
TODO: add screenshots
Before releasing a new version of a core connector project, it is good to set up the CircleCI pipeline.
There is a template config.yml file under the .circleci directory. Usually it does not require any changes.
The existing configuration file is set up to:
- Package a JAR file via the Maven build.
- Build a Docker container image, based on the project Dockerfile, for the application JAR file.
- Push the Docker container image to its repository.
- Trigger all the above steps only on a Git tag (GitHub Release).

Follow the steps below to set up the CircleCI pipeline:
- Access the organization CircleCI. Use the Log In with GitHub option on Sign In.
- On the Projects menu, look for the core connector project and press Set Up Project.
- In the Select a config.yml file dialog box, check the option for the existing .circleci/config.yml in the repo and assign it to the master branch. Press to continue to the project dashboard.
- Press the Project Settings button and select the Environment Variables menu. Press Import Variables, select any other core connector project, and confirm the import action.
- Now the CircleCI pipeline should be triggered once the GitHub project receives a new release/tag.
- Ensure the Dockerfile is up to date.
- The project has a CircleCI pipeline that takes care of publishing the Docker image to the registry. This pipeline is triggered when a Git tag is created, so use the Releases option on GitHub to create the new release version, which will generate the tag. Keep the Semantic Versioning format when creating a new tag, like vM.m.p (e.g. v1.0.0 - the v MUST be lowercase).
- Check that the CircleCI pipeline ran successfully on CircleCI. PS: going to be updated to GitHub Actions.
- Check that the image was published properly to DockerHub.
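A quick way to check a candidate tag against the vM.m.p format before creating the GitHub release (the regex below is an assumption based on the Semver format described above; is_valid_tag is a hypothetical helper):

```shell
# Sketch: validate a release tag name against the vM.m.p pattern (lowercase v).
is_valid_tag() {
  echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

is_valid_tag "v1.0.0" && echo "v1.0.0 is valid"
is_valid_tag "V1.0.0" || echo "V1.0.0 is rejected (the v must be lowercase)"
```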
Prerequisites
- Access to GitLab project.
TODO: add screenshots
The PM4ML environments are deployed by GitLab pipelines. To be able to update and run them, check which GitLab project is the proper one for the core connector being updated.
- Edit the k3s-sync-dir/static.pm4ml.values.yaml file and look for the mojaloop-core-connector entry. Update the image and env entries according to the core connector project you want to deploy.
- Commit to the master branch or open a Merge Request to request change approval.
- Go to the CI/CD menu and select Pipelines. Usually the first pipeline corresponds to the last change made. Select it and press play to run the Install PM4MLs job under the Deploy PM4MLs pipeline.
- Check that the pipeline finished successfully and proceed to the validation step.
Prerequisites
- aws cli
- kubectl
- Postman
- Access to the S3 Bucket of the environment
- Sync the S3 Bucket files
rm -rf ./mmd_thitsaworks_uat_s3; \
mkdir ./mmd_thitsaworks_uat_s3; \
aws s3 sync s3://pre-mojaloop-state/thitsaworks-uat/ ./mmd_thitsaworks_uat_s3
- Connect to the environment VPN with the provided WireGuard profile file and check the status of the connection.
sudo wg-quick up ./thitsa-uat.conf
sudo wg
- Export KUBECONFIG pointing to the config file downloaded from the S3 Bucket, to enable connecting to the pods.
export KUBECONFIG=path/to/mmd_thitsaworks_uat_s3/kubeconfig
- List the pods in the proper namespace to get the core connector pod name.
kubectl get pods -n thitsaworks
- Open the core connector logs to follow the request messages.
kubectl logs -f -n thitsaworks thitsaworks-mojaloop-core-connector-7fb769b4bc-qlbkf
- Perform the requests against the core connector environment URL and check the results (follow the logs if necessary).
- Disconnect from the environment VPN and check the connection is gone (empty command result).
sudo wg-quick down ./thitsa-uat.conf
sudo wg
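The pod suffix in the kubectl logs step above changes on every deployment, so the pod name can be derived instead of copied by hand. A sketch, with the kubectl output simulated by a literal list since this is an illustration only (the real command is shown in the comments):

```shell
# Sketch: pick the core connector pod name from `kubectl get pods -o name` output.
pick_pod() {
  grep "$1" | head -n 1
}

# Simulated output of: kubectl get pods -n thitsaworks -o name
pods="pod/thitsaworks-mojaloop-core-connector-7fb769b4bc-qlbkf
pod/some-other-service-5c9f"
pod=$(echo "$pods" | pick_pod mojaloop-core-connector)
printf '%s\n' "${pod#pod/}"
# Real usage: kubectl logs -f -n thitsaworks "${pod#pod/}"
```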
For version updates, please follow the [Semver](https://semver.org/#semantic-versioning-200) pattern.