Building a bridge service
| Date | Responsible | Changes |
|---|---|---|
| August 23, 2024 | @Željko Rumenjak | Initial version |
| January 15, 2025 | @Jakov Krolo | Code updated to work with the newest SDKs. CLI commands updated to align with the newest CLI version. Highlighted all references to the imaginary “mint” bank to better show where customer-specific values go. Removed the “Proxying ledger requests” section until a stable best practice is available for the public. |
Introduction
This tutorial shows you how to connect a bank to a cloud-based ACH network built using the Minka Ledger. To do this, we will implement a bridge service which connects the bank’s core systems with the cloud ledger.
A bridge is a two-way connector between payment networks: a service which securely connects two payment networks and serves as a translator between their protocols.
We will be using the prebuilt template in order to make the integration process faster and smoother.
Like always, when building a production-ready service, follow your usual best practices on application design and security.
The code is written using TypeScript, but you can also use JavaScript.
Quick Start
Prerequisites
We will need the following tools to develop and run the demo. You may already have some of them installed from the Making cross-ledger payments tutorial.
This tutorial is an extension of the Making cross-ledger payments tutorial. If you have not completed it yet, please do so first to get familiar with basic Ledger concepts. The following steps assume that you already have a bank set up, which is done as part of that tutorial.
Ledger instance
We will need a cloud ledger instance to work with.
Node.js and npm
https://nodejs.org/en/download/
Minka CLI tools
https://www.npmjs.com/package/@minka/cli
After installing Node.js and npm, you can install Minka CLI tools by running the following command.
This tutorial is still in an experimental phase and uses a pre-release version of some Minka libraries and the CLI tool. Please install the alpha version of the CLI to follow it:
npm install -g @minka/cli@alpha
Docker
https://docs.docker.com/get-docker/
Creating a project
The quickest way to start working on a new integration is to set up a new project using the Minka CLI:
We have now set up a new bridge project, which is a great starting point for building integrations between payment networks.
The project already uses many best practices to handle complex issues like asynchronous processing, retries, and idempotency, and it persists all requests to facilitate easy reconciliation and auditability.
The service we have created defines all the API endpoints required to connect to a remote ledger, along with a mock banking core implementation that demonstrates how to connect your own banking core or payment system.
Using this code is not a requirement for connecting to the ledger, but it simplifies the process and solves a lot of additional issues like reconciliation and recovery from errors. These issues usually become apparent only after going live, where they create additional costs and result in a bad user experience.
The bridge code is open source and you can modify any part of it to adapt it to your own needs.
Running a local bridge
We can run the service we have created by going into the newly created directory and running the following commands:
The CLI starts a local server and registers it with the ledger automatically by creating or updating a bridge record.
Leave the bridge running and continue the tutorial in a new terminal.
If you followed the previous tutorials, your bridge should already be assigned to your bank wallet. If not, please check the Integrating with an RTP rail tutorial to set this up.
Assigning a bridge to a wallet declares to the ledger that each balance movement related to this wallet needs external confirmation.
After making this change, the ledger is going to contact our bridge to confirm any debit or credit related to our bank wallet. We will see how this works in the next chapter.
Processing payments
Now that we have everything running, we can send a first payment intent to our bridge. We can trigger a credit on our local bridge by sending funds to our wallet:
The ledger detects that the target wallet of this balance movement has a bridge assigned and sends a prepare credit request to it. We can see this in the log of our bridge:
As confirmation, the bridge sends a signature to the ledger that contains a unique `coreId` (transaction reference) of the operation performed by the bridge.
The entire payment intent payload is delivered to the bridge as well for verification purposes.
This request is part of a two-phase commit protocol that the ledger uses to ensure that all participants in a distributed transaction correctly perform their responsibilities. Confirmations sent to the ledger must be proofs made with a private key registered with the bridge, and must contain a transaction reference of the operation performed by the bridge as evidence.
Errors are handled in a similar way: a signature with error details must be sent to the ledger. If the bridge is down, the ledger will retry its requests.
Each request sent by the ledger has a unique id which serves as an idempotency token to prevent double operations. This id is sent in the `handle` field of the incoming request.
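The handle-based idempotency described above can be sketched as follows. This is an illustrative, in-memory sketch with hypothetical names (`prepareCreditOnce`, `PrepareResult`), not the actual Bridge SDK implementation, which persists results in a database:

```typescript
// Hypothetical sketch: using the ledger request `handle` as an idempotency key.
type PrepareResult = { status: "prepared" | "failed"; coreId?: string };

const processed = new Map<string, PrepareResult>();

function prepareCreditOnce(
  handle: string,
  doPrepare: () => PrepareResult,
): PrepareResult {
  // A retried request with the same handle returns the stored result
  // instead of performing the banking operation a second time.
  const existing = processed.get(handle);
  if (existing) return existing;

  const result = doPrepare();
  processed.set(handle, result);
  return result;
}
```

This is why the ledger can safely retry requests: replays of the same `handle` are absorbed instead of creating duplicate operations.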
We will explore the two-phase commit protocol in more detail in the next chapters.
Bridge project structure
The Bridge SDK automatically handles scheduling, communication with the ledger, data persistence, idempotency, auditability, and retries. We only need to implement adapters that perform the required operations in our banking core systems.
Most of the complexity related to communication with the ledger is handled by the `@minka/bridge-sdk` and `@minka/ledger-sdk` libraries, which are provided and maintained by the ledger core team. These libraries are already installed and configured in our project.
The project file structure is shown below:
The project is very simple because it only contains code that is custom to our integration; everything reusable and generic is already included in the libraries provided by the ledger team.
Bridge adapters
Most of the work we need to do is in the `adapters` directory. It contains custom adapters which map two-phase commit operations to banking systems. As you can see, this directory contains two files, one for credits and one for debits.
Each adapter has three functions, `prepare`, `commit`, and `abort`, which we can use to run the necessary validations and create the required transactions in core banking systems in response to payment intents from the ledger. These functions only need to return the final result of the operation, successful or not, and the `bridge-sdk` is going to send all the required proofs to the ledger to properly record this result.
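To make the adapter shape concrete, here is a minimal sketch. The type names (`EntryAdapter`, `AdapterResult`, `EntryContext`) are hypothetical and simplified; the real `@minka/bridge-sdk` types differ:

```typescript
// Illustrative adapter shape: each adapter implements prepare, commit and
// abort, and only reports the outcome plus a core banking reference.
type AdapterResult =
  | { status: "success"; coreId: string } // reference of the core operation
  | { status: "failure"; reason: string };

interface EntryContext {
  handle: string; // idempotency token from the ledger
  target: string; // target account or wallet
  amount: number;
}

interface EntryAdapter {
  prepare(ctx: EntryContext): AdapterResult;
  commit(ctx: EntryContext): AdapterResult;
  abort(ctx: EntryContext): AdapterResult;
}

// A minimal credit adapter that always succeeds, mirroring the behaviour
// of the default template before we add real validations.
const creditAdapter: EntryAdapter = {
  prepare: (ctx) => ({ status: "success", coreId: `prep-${ctx.handle}` }),
  commit: (ctx) => ({ status: "success", coreId: `tx-${ctx.handle}` }),
  abort: (ctx) => ({ status: "success", coreId: `void-${ctx.handle}` }),
};
```

The SDK turns whatever result the adapter returns into a signed proof for the ledger, so the adapter never deals with signatures directly.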
It is enough to implement those two files to have a working integration with the ledger. There is also a `README.md` file in this folder which contains a more detailed explanation of the adapters.
Adding `README.md` files is a general pattern the project follows, so that the most common answers are immediately available together with the code.
Core SDK
The Core SDK directory contains a mock implementation of an in-memory banking core. This code is added to the project by default to better show how to connect a banking core system.
You can safely remove this code from the project and replace it with SDKs that connect to your banking systems; it is included for demonstration purposes only.
Config
The `.env` file is generated automatically by the CLI when setting up the project. It uses default values for the DB connection, the signers provided to the CLI when creating the project, etc.
The `config.ts` file loads and validates the `.env` file. Please review this config file and update the values as needed. Protect private keys and DB passwords using the best practices you usually follow when deploying services within your organization.
The `.env` file contains sensitive data like keys and passwords and should never be committed to version control.
`main.ts` is the entry point of our bridge service. This file bootstraps the whole service and starts it. Use it to register additional adapters and routes, and to modify any other setup values you may need to change.
The service consists of two main components which are bootstrapped in `main.ts`: a `server` and a `processor`.
The server is an Express app which exposes REST APIs. You can configure the server like this:
Processors are background workers which enable an asynchronous processing model. For example, credits and debits are asynchronous operations, so adapters for those operations can be registered when bootstrapping processors:
You can learn more about the bridge architecture and configuration in the `README.md` file of the project.
Two-phase commit protocol
The bridge serves as a two-way connector between payment systems. All ledger features are available as REST APIs, an industry standard for interoperability between systems.
The bridge exposes two-phase commit protocol APIs to enable delivery of ledger events to the bridge. The main purpose of these endpoints is to perform operations on the bank side in response to events happening outside of the bank, for example crediting user accounts in response to an incoming payment.
The two-phase commit protocol is used in distributed database systems to achieve atomicity across the multiple nodes involved in a transaction. We use this protocol to ensure that multiple financial systems process transactions consistently.
Confirmations sent to the ledger must be proofs made with the private key of a participant, and must contain a transaction reference of the operation performed by the participant as evidence. Proofs guarantee a very high level of security in the system: they allow us to store information about the initiating participants together with transactions, which makes the entire system completely auditable and guarantees non-repudiation of all transactions.
Recording transaction references from external systems enables a completely automated reconciliation process. All references are available in the system and cross-referencing of operations is done by id, removing guesswork and avoiding the need for various heuristics.
Errors are handled in a similar way: a proof with error details must be sent to the ledger. If the bridge is down, the ledger will retry its requests.
Each request sent by the ledger has a unique id which serves as an idempotency token to prevent double operations. This id is sent in the `handle` field of incoming requests.
Properly implemented retry logic is very important to ensure that the two phase commit protocol works reliably. This is why the bridge project comes with a robust scheduler component that handles idempotency and retries out of the box.
The protocol has two phases: a prepare phase and a commit phase. The prepare phase should validate and ensure that an operation can be executed, but the operation isn’t considered completed until a commit or abort request is received from the ledger. If a participant confirmed that a prepare was successful, it must ensure that a commit of this operation cannot fail; any potential failure must be resolved by the participant.
This protocol allows the ledger to coordinate transactions between multiple participants: it asks all the participants to prepare the operation and commits it if all of them prepared successfully. If some participants fail, the ledger sends an abort operation instead, to make sure that everyone performs a rollback.
To summarize, two things are important to understand about the two-phase commit protocol:
- Prepare requests are not final; they can still be reverted by the ledger.
- If a commit of a prepared request is later sent by the ledger, it must succeed. A commit after a successful prepare must not fail; the ledger will retry it until it succeeds.
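The coordination logic described above can be sketched as follows. This is a simplified, synchronous illustration of the protocol; the actual ledger coordinates participants asynchronously over the network, with retries:

```typescript
// Simplified two-phase commit coordinator (illustrative only).
type Vote = "ok" | "fail";

interface Participant {
  prepare(): Vote; // phase 1: validate and promise the commit cannot fail
  commit(): void;  // phase 2: finalize; must not fail after a successful prepare
  abort(): void;   // phase 2 alternative: roll back whatever prepare did
}

function runTwoPhaseCommit(participants: Participant[]): "committed" | "aborted" {
  // Phase 1: ask every participant to prepare.
  const votes = participants.map((p) => p.prepare());

  if (votes.every((v) => v === "ok")) {
    // Phase 2: everyone promised success, so commit all.
    participants.forEach((p) => p.commit());
    return "committed";
  }

  // At least one participant failed to prepare: roll everyone back.
  participants.forEach((p) => p.abort());
  return "aborted";
}
```

The key invariant is visible in the code: `commit` is only ever called after every participant voted `ok`, which is why a prepared participant must guarantee its commit succeeds.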
Banks can implement the prepare phase in various ways. It can be a reservation of funds in the banking core or a transaction that could be reversed in case the entry is aborted.
Below is a summary of the REST APIs that the bridge exposes to implement the two-phase commit protocol:
- `POST /v2/credits`: Called during two-phase commit when the bridge needs to prepare a credit Entry. The bridge should check that the account exists and is active, and do any other checks necessary to ensure the intent can proceed.
- `POST /v2/credits/:handle/commit`: Called during two-phase commit when the bridge needs to commit a credit Entry. This request must succeed and is considered successful even if there is an error while processing. In that case, the bank needs to fix the issue manually.
- `POST /v2/credits/:handle/abort`: Called during two-phase commit when the bridge needs to abort a credit Entry. This request is called if there is an issue while processing the intent. Like commit, it cannot fail, and it must completely reverse the intent.
- `POST /v2/debits`: Called during two-phase commit when the bridge needs to prepare a debit Entry. Same as for credit, but it also needs to hold or reserve the necessary funds on the source account.
- `POST /v2/debits/:handle/commit`: Called during two-phase commit when the bridge needs to commit a debit Entry. Same as for credit.
- `POST /v2/debits/:handle/abort`: Called during two-phase commit when the bridge needs to abort a debit Entry. Same as for credit, but it may also need to release the held funds.
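As an illustration of how these six endpoints decompose into an entry kind and an operation, here is a hypothetical routing sketch. The bridge template already wires this up for you, so this is for understanding only:

```typescript
// Hypothetical mapping of the two-phase commit endpoints to adapter operations.
type Operation = "prepare" | "commit" | "abort";
type EntryKind = "credits" | "debits";

interface Route { kind: EntryKind; op: Operation; handle?: string }

function routeToOperation(path: string): Route | null {
  // POST /v2/credits            -> prepare credit
  // POST /v2/credits/:h/commit  -> commit credit, etc.
  const m = path.match(/^\/v2\/(credits|debits)(?:\/([^/]+)\/(commit|abort))?$/);
  if (!m) return null;
  const kind = m[1] as EntryKind;
  if (!m[2]) return { kind, op: "prepare" };
  return { kind, op: m[3] as Operation, handle: m[2] };
}
```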
The bridge project exposes adapters which allow us to provide the custom logic required to process two-phase commit messages in banking core systems. We will implement those adapters in the following chapters.
Preparing credits
We can now start implementing the two-phase commit protocol adapters in our bridge. We will use the mock bank SDK that comes with the project by default to simulate the transactional banking system we are connecting to the ledger.
Prepare credit is the first bridge API endpoint called when an incoming payment intent arrives at our bank. In this handler we need to validate the incoming payment intent data and the target client, to make sure the target account exists and is able to receive funds.
Banks may of course decide to run fraud checks and any number of other checks at this point.
The default prepare credit implementation that comes with our starter project always confirms the operation. Let’s look at the code of the default credit adapter:
As you can see, the adapter has a function for each operation and it always returns a successful result for now.
Let’s do some basic intent verification and start using the `bankSdk` that comes with the project to verify the target account before preparing a credit:
The Bridge SDK performs technical validations related to the consistency of the received data. It makes sure that we receive only valid prepare credit calls, and it verifies that the incoming data is signed by the public key of the ledger configured in the `.env` file, to prevent man-in-the-middle and similar attacks.
The code above demonstrates various validations that could be made when preparing a credit. The prepare credit operation usually involves only validations, since we don’t want to credit user accounts until the transaction is finalized. We show how to validate the incoming data and how you could call external systems to do additional checks, like account status verification.
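This kind of target-account validation can be sketched like this, using a hypothetical in-memory account model rather than the project's actual `bankSdk` API:

```typescript
// Illustrative prepare-credit validation (hypothetical account model).
interface Account {
  id: string;
  active: boolean;
  canReceive: boolean; // e.g. not blocked for incoming payments
}

type ValidationError = "account-not-found" | "account-inactive" | "account-blocked";

function validateCreditTarget(
  accounts: Map<string, Account>,
  targetId: string,
): ValidationError | null {
  const account = accounts.get(targetId);
  if (!account) return "account-not-found";
  if (!account.active) return "account-inactive";
  if (!account.canReceive) return "account-blocked";
  return null; // safe to confirm the prepare
}
```

A real implementation would query the core banking system instead of a map, and could add fraud or limit checks at the same point.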
Our simulated banking system has several preconfigured accounts which we can use for testing:
We can send a payment intent to an inactive account to see that our validations work:
In our bridge console we should see that the intent fails:
Committing credits
Next, we can implement our commit credit function, which is called when a payment intent is confirmed by all participants.
The commit status is final: when we receive a commit credit call we know it cannot be reversed, so it is safe to release the funds to our user at this point. The Bridge SDK is going to make sure this commit credit matches a prepare credit that was already performed by us, which means we don’t have to repeat the validations from the prepare call, and we can also avoid many technical validations related to data consistency.
Let’s add our commit credit handler now:
We don’t have any special error handling in this implementation because our mock bank core is very simple and errors shouldn’t be possible. The Bridge SDK is going to map all unrecognized errors to `bridge.unexpected-error` by default.
In a production version of the bridge, we would have to properly map our internal errors to ledger errors.
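Such an error mapping could look like the sketch below. Only `bridge.unexpected-error` is taken from the text above; the other ledger error codes and the internal bank codes are made-up examples:

```typescript
// Sketch of mapping internal bank error codes to ledger error codes.
// All codes except bridge.unexpected-error are hypothetical examples.
const errorMap: Record<string, string> = {
  ACC_NOT_FOUND: "bridge.account-not-found",
  ACC_FROZEN: "bridge.account-suspended",
  LIMIT_EXCEEDED: "bridge.limit-exceeded",
};

function toLedgerError(internalCode: string): string {
  // Anything we do not recognize falls back to the generic error,
  // mirroring the SDK's default behaviour described above.
  return errorMap[internalCode] ?? "bridge.unexpected-error";
}
```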
The commit phase is not allowed to fail; participants have to implement a process to identify any failed operations and resolve them manually.
We can now test our new handler by creating a successful payment intent:
In our bridge console we should see that a new transaction is created:
Aborting credits
Abort operations are used to notify participants that a payment intent has been aborted. Participants should use this operation to clean up and roll back any actions they may have performed in their system.
We only performed validations in our prepare credit handler, which means there is nothing we need to do in this handler.
We can leave it as it is, since the default implementation already confirms that the operation was done successfully.
Preparing debits
Debit operations usually happen when a payment intent is initiated by us. There are also some newer use cases that can be supported using the debit operation, like third-party payment initiation, the request flow, and direct debits.
In this tutorial we are going to focus only on payment intents initiated by us. We can check that we created an incoming intent by validating the proofs on that intent. We know that our private key is securely stored only in our system, so it is not possible for anyone else to use it. We can verify this using the `ledgerSdk`:
Third-party initiation can be implemented in a similar way: we can keep a list of authorized public keys and allow debits only from payment intents initiated by those keys.
It is important to reserve user funds in the prepare debit phase to avoid issues with insufficient balances in the commit phase. The main difference between credits and debits is that for credits the transaction happens in the commit phase, while for debits it happens in the prepare phase.
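The reserve-then-commit-or-release pattern for debits can be sketched with a hypothetical in-memory account model (not the project's `bankSdk`):

```typescript
// Illustrative hold/commit/release of funds for the debit flow.
interface DebitAccount {
  balance: number; // total funds on the account
  held: number;    // funds reserved by pending prepares
}

function holdFunds(acc: DebitAccount, amount: number): boolean {
  // Prepare debit: reserve the funds so the later commit cannot fail.
  if (acc.balance - acc.held < amount) return false; // insufficient available funds
  acc.held += amount;
  return true;
}

function commitHold(acc: DebitAccount, amount: number): void {
  // Commit debit: the reservation becomes a real balance movement.
  acc.held -= amount;
  acc.balance -= amount;
}

function releaseHold(acc: DebitAccount, amount: number): void {
  // Abort debit: give the reserved funds back to the user.
  acc.held -= amount;
}
```

Because the funds are already held after a successful prepare, the commit is a pure bookkeeping step and can always succeed, which is exactly what the protocol requires.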
Let’s put all this together with our standard validations from the credit adapter to get our prepare debit handler:
To test our new adapter, we can create an intent from an account that exists in our system. It is important to use our bridge signer to create this intent; otherwise it is going to be rejected by our bridge:
In our bridge console we should see a prepare debit request:
Committing debits
Since we already processed our debit transaction in the prepare debit phase, there is nothing we need to do in the commit debit phase. We can leave this handler to automatically confirm all debit commits that were prepared by us.
Banks may use this endpoint to record final transaction statuses, notify users, or for other bookkeeping activities.
Aborting debits
In the debit abort we have to reverse the transaction we previously made in the prepare phase. Abort handlers must not fail, so we need to make sure that we can reliably recover from any technical errors in the system.
It is recommended to implement retries in case of failures in the core banking services. The bridge scheduler service can be used for this purpose, since it allows us to reschedule jobs for later.
Our abort debit handler is relatively simple; it just needs to perform the opposite action of the one from prepare debit:
The retry strategy implemented above is very simplistic; when building a production-ready service we would implement a more advanced strategy, for example exponential backoff with a limited number of retries.
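For example, exponential backoff with a capped number of retries could be sketched like this. The delay values and helper names are illustrative, not the bridge scheduler's actual implementation:

```typescript
// Sketch of exponential backoff with a retry limit.
function backoffDelays(baseMs: number, maxRetries: number): number[] {
  // Retry n waits baseMs * 2^n, e.g. 100, 200, 400 ms for three retries.
  return Array.from({ length: maxRetries }, (_, n) => baseMs * 2 ** n);
}

async function withRetries<T>(
  task: () => Promise<T>,
  baseMs = 100,
  maxRetries = 4,
): Promise<T> {
  let lastError: unknown;
  // First attempt runs immediately (delay 0), then each retry waits longer.
  for (const delay of [0, ...backoffDelays(baseMs, maxRetries)]) {
    if (delay > 0) await new Promise((resolve) => setTimeout(resolve, delay));
    try {
      return await task();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError; // out of retries: surface the failure for manual resolution
}
```

Capping the retry count matters here: once retries are exhausted, the failure should be escalated to the manual-resolution process mentioned earlier rather than looping forever.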
Conclusion
The bridge we have built here should give us a good understanding of the two-phase commit protocol and of the ledger integration process in general. We have a working solution which reliably synchronizes two ledgers: a cloud ledger instance and our banking core.
You can learn more about all the supported use cases and detailed flows in our technical documentation.
The code shared here is open source and you can use it freely. Of course, this is not a final, production-ready solution. You will still need to adapt it to your specific needs, secure it properly, and host it.
Additionally, our open-source bridge SDKs and samples give you the tools necessary to build an entire integration, but you will still need to implement many things before the integration is ready for production.
All the data needed for reconciliation is available in the system, but the system does not produce actual reconciliation reports. The system is built to be scalable through asynchronous processing, but you will have to run performance tests and configure the solution properly for production. You will also need to securely store secrets in your infrastructure, and build observability, monitoring, notifications, and the many other things you always have to do when deploying a new service.
Minka also provides a payments hub solution, a cloud bridge that solves all of the issues mentioned above. If you are interested in this solution instead of building everything on your own, please contact your sales representative to learn more.