- Feature Name: (fill me in with a unique ident, my_awesome_feature)
- Start Date: (fill me in with today's date, YYYY-MM-DD)
- RFC PR: (leave this empty)
- Hathor Issue: (leave this empty)
- Author: (fill this with Your Name your@email, Other Author other@email)
# Summary
Wallet-service dockerized implementation.
# Motivation
Deploying the wallet-service api on AWS Lambda makes both the deployment process and the debugging of issues on the running service very hard.
By changing the handlers and code to run under a web framework like express or fastify, we can build a docker image of the wallet-service api that can be deployed on our own infrastructure.
This gives us more control over the production and development environments, and lets us investigate issues on the service more effectively thanks to the plethora of tooling available for dockerized environments.
# Guide-level explanation
The dockerized implementation will be written in typescript using the express web framework.
We will first map the changes required to provide the same functionality as the current service, then explain the implementation requirements.
## Routing and caching
Some functionality that is currently handled by the serverless framework and described in the serverless.yaml file will need to be implemented by us, for instance routing and a caching strategy.
The express router can be decoupled from the handlers and implemented in its own file, so we can achieve something similar to the serverless file, where all http endpoints are configured in a single place.
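As a rough sketch of this single-file routing, a plain route table can stand in for the endpoint list in serverless.yaml (the handlers and method choices here are placeholders, not the real modules):

```typescript
// routes.ts - a sketch of a central route table, standing in for the endpoint
// list in serverless.yaml; the handlers here are placeholders.
type Handler = (req: unknown, res: unknown) => void;
type AppLike = Record<"get" | "post" | "put" | "delete", (path: string, handler: Handler) => void>;

const placeholder: Handler = () => {};

export const routes: Array<{ method: keyof AppLike; path: string; handler: Handler }> = [
  { method: "post", path: "/wallet/init", handler: placeholder },
  { method: "get", path: "/wallet/status", handler: placeholder },
  { method: "get", path: "/wallet/addresses", handler: placeholder },
  { method: "get", path: "/wallet/balances", handler: placeholder },
  // ...the remaining endpoints follow the same pattern
];

// Wire the whole table into an express app (or anything app-shaped).
export function registerRoutes(app: AppLike): void {
  for (const { method, path, handler } of routes) {
    app[method](path, handler);
  }
}
```

`registerRoutes` would be called once at startup with the express app; because the table is plain data it can also be reused for documentation or tests.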
Caching is more challenging: we could implement our own cache layer with a key-value store, or use an existing cache solution.
Both approaches require the service to sit behind a reverse proxy that decides whether to serve the cached response or call the api.
Some cache solutions:
- Varnish Cache reverse proxy
- nginx proxy cache
Both are production ready and can be deployed on kubernetes.
## Handler implementation
The serverless.yaml file dictates the configuration of the api gateway: which routes are sent to which handler function, and so forth.
This means the api's handler function only has to receive an event and return an event with the data the api gateway needs to form a response.
To convert a lambda handler to an express handler we need to:
- change references to the `event` object to use the provided `request`
  - e.g. `event.queryStringParameters` should be `req.query`
- change the response event to use the `response` object
  - e.g. `res.send(data)`
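The mapping above can be sketched as a small adapter that builds the lambda-style event an existing handler expects from an express request, so handlers can be migrated one at a time (the adapter and its field selection are illustrative, not part of the current codebase):

```typescript
// Illustrative adapter: build a lambda-style event from an express request,
// so existing handlers can run unmodified during the migration.
type ExpressReqLike = {
  query: Record<string, string>;
  params: Record<string, string>;
  headers: Record<string, string>;
  body: unknown;
};

type LambdaEventLike = {
  queryStringParameters: Record<string, string>;
  pathParameters: Record<string, string>;
  headers: Record<string, string>;
  body: string | null;
};

export function toLambdaEvent(req: ExpressReqLike): LambdaEventLike {
  return {
    queryStringParameters: req.query, // event.queryStringParameters -> req.query
    pathParameters: req.params,       // event.pathParameters -> req.params
    headers: req.headers,
    body: req.body == null ? null : JSON.stringify(req.body), // lambda bodies are raw strings
  };
}
```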
## Database connection
We need to improve the database connection management: we will require a central pool of connections that all methods use, instead of each handler declaring its own database connection.
The queries themselves are pure SQL, so we can be sure they will keep working as long as we keep the same database (which fixes the SQL dialect) and the same engine (which keeps the method signatures and return types of the queries).
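A minimal sketch of such a central pool, with the pool factory injected so the database driver (e.g. the mysql2 package) is referenced in a single place; all names here are hypothetical:

```typescript
// db.ts - sketch of a shared, lazily-created connection pool. The factory is
// injected so the mysql2 dependency stays in one place.
export type PoolLike = {
  query(sql: string, params?: unknown[]): Promise<unknown>;
};

let pool: PoolLike | undefined;

export function initPool(factory: () => PoolLike): PoolLike {
  if (!pool) pool = factory(); // create once, reuse everywhere
  return pool;
}

export function getPool(): PoolLike {
  if (!pool) throw new Error("pool not initialized");
  return pool;
}

// At startup, assuming the mysql2 package:
//   import mysql from "mysql2/promise";
//   initPool(() => mysql.createPool({
//     host: process.env.DB_HOST,
//     connectionLimit: 10,
//   }));
```

Handlers then call `getPool().query(...)` instead of opening their own connections.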
## Web-socket events
The current web-socket implementation is highly dependent on the AWS API Gateway web-socket model; we can reuse some of the methods, but the best solution is to remove the vendor lock-in and design a scalable solution for the web-socket events.
When the daemon identifies that a new transaction has arrived, it will send the data to a queue to be processed by a consumer on the wallet-service connected to the database. This consumer will check which wallets are interested in this particular event (by checking the addresses involved and the wallets they belong to) and publish the data on a redis instance using pub/sub.
The message will be read by the web-socket server instances; this service can be scaled horizontally to multiple instances, each subscribing to all events.
When a wallet connects to the web-socket server it will issue a command announcing its wallet-id.
The server can then use this id to send only the events meant for that wallet.
The message queue and consumer offload the processing of these events from the daemon, which is busy processing all transactions; redis can be configured to handle a high throughput of events, and the web-socket server can be scaled with the number of active users.
This provides a highly scalable alternative to the current system while keeping the same service for the clients, so no changes are required on the client code.
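The per-wallet fan-out step on the web-socket server could look like the following sketch (socket and map names are hypothetical; the redis subscription itself is omitted):

```typescript
// Sketch of the fan-out step: each server instance subscribes to every event
// published on redis and forwards a message only to sockets that joined with
// the matching wallet-id.
type SocketLike = { send(data: string): void };

const byWallet = new Map<string, Set<SocketLike>>();

export function join(walletId: string, socket: SocketLike): void {
  if (!byWallet.has(walletId)) byWallet.set(walletId, new Set());
  byWallet.get(walletId)!.add(socket);
}

export function leave(walletId: string, socket: SocketLike): void {
  byWallet.get(walletId)?.delete(socket);
}

// Called for every message received from the redis subscription.
// Returns how many sockets the event was delivered to.
export function fanOut(walletId: string, payload: unknown): number {
  const sockets = byWallet.get(walletId);
  if (!sockets || sockets.size === 0) return 0;
  const data = JSON.stringify(payload);
  for (const s of sockets) s.send(data);
  return sockets.size;
}
```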
## Wallet creator service
When a client tries to create a new wallet, the api sends the request to a message queue.
With serverless we had a handler that was invoked for each event on the queue; now we need to create a queue worker that will dequeue events and create the wallets.
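A sketch of such a worker loop, with the dequeue and handling functions injected so the same loop can sit on top of amqplib (RabbitMQ) or the AWS SDK (SQS); all names are hypothetical:

```typescript
// Sketch of a generic queue-worker loop. The dequeue and handler functions
// are injected, so the queue client (amqplib, aws-sdk) stays pluggable.
export type QueueMessage = { body: string };

export async function runWorker(
  dequeue: () => Promise<QueueMessage | null>,
  handle: (msg: QueueMessage) => Promise<void>,
  maxMessages = Infinity,
): Promise<number> {
  let processed = 0;
  while (processed < maxMessages) {
    const msg = await dequeue();
    if (msg === null) break; // queue drained (or shutdown requested)
    await handle(msg);       // e.g. create the wallet described in msg.body
    processed++;
  }
  return processed;
}
```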
## Infrastructure
The "default" infrastructure should be able to run entirely on docker containers; this makes development easier and makes deployment on kubernetes or any other provider possible.
Services and containers required:
- Database - MySQL instance or cluster
- Redis - instance or cluster
- Message Queue - instance or cluster
  - A single RabbitMQ instance can handle multiple queues.
  - A single RabbitMQ instance may be able to handle our production load.
- Queue workers
  - Wallet creator: as many containers as needed
  - Tx processor for WS: single container if event order needs to be kept.
- API - one container, but can be scaled as needed
- Web-socket server - one container, but can be scaled as needed
- Daemon - single container connected to Hathor node, does not scale
With this, the minimal deployment of the wallet-service has 8 containers: 2 databases, 1 message queue, 3 workers (2 queue workers and the daemon) and 2 services.
All of these can be configured in a single docker-compose file for development, and in production each part can be deployed separately so it can scale as needed or be exchanged for a managed service.
For instance, if we want to keep the message queue on SQS or use Google's Pub/Sub, we could also deploy the queue workers with serverless so they are deployed along the queues.
# Reference-level explanation
We will create 3 new workspaces in the wallet-service monorepo:
- wallet-service-api
  - Express implementation of the current wallet-service lambda handlers
- ws-server
  - Web-socket server for clients to connect and receive events
- queue-worker
  - The implementation of both queue workers
## Workspace: wallet-service-api
The endpoints that need to be converted to the express handler implementation:
- `/wallet/init`
- `/wallet/auth`
- `/wallet/status`
- `/wallet/addresses/check_mine`
- `/wallet/addresses`
- `/wallet/address/info`
- `/wallet/addresses/new`
- `/wallet/utxos`
- `/wallet/tx_outputs`
- `/wallet/balances`
- `/wallet/tokens`
- `/wallet/tokens/:token_id/details`
- `/version`
- `/wallet/history`
- `/tx/proposal`
- `/tx/proposal/:txProposalId`
- `/tx/proposal/:txProposalId`
- `/wallet/proxy/nano_contract/state`
- `/wallet/proxy/nano_contract/history`
- `/wallet/proxy/nano_contract/blueprint/info`
- `/auth/token`
- `/metrics`
- `/wallet/push/register`
- `/wallet/push/update`
- `/wallet/push/unregister/:device_id`
- `/wallet/transactions/:tx_id`
- `/wallet/proxy/transactions/:tx_id`
- `/wallet/proxy/transactions/:tx_id/confirmation_data`
- `/wallet/proxy/graphviz/neighbours`
Some lambdas are meant to be invoked via the AWS SDK, meaning they are private functions that should not be made public.
To achieve similar functionality we will create an /admin route for each of them that can be optionally added to the express app: the production environment will not have these apis, but we can deploy a private instance of the api service with the admin routes for our own use.
- `/admin/getLatestBlock` - Gets the service's current best block
- `/admin/miners` - Gets a list of all miners on the database
- `/admin/totalSupply` - Gets the calculated sum of utxos on the database, excluding the burned ones
- `/admin/processNewNFT` - Makes final validations and updates NFT metadata on the explorer-service
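Mounting these routes optionally could be as simple as a registration function that is a no-op unless the instance is configured as private (a sketch; the flag and handler bodies are placeholders):

```typescript
// Sketch: admin endpoints are registered only when the instance is
// configured as private; public deployments never see these routes.
type AdminHandler = (req: unknown, res: unknown) => void;
type AdminAppLike = { get(path: string, handler: AdminHandler): void };

const todo: AdminHandler = () => {};

export function mountAdminRoutes(app: AdminAppLike, enabled: boolean): string[] {
  if (!enabled) return []; // public deployments: nothing is registered
  const adminRoutes: Array<[string, AdminHandler]> = [
    ["/admin/getLatestBlock", todo],
    ["/admin/miners", todo],
    ["/admin/totalSupply", todo],
    ["/admin/processNewNFT", todo],
  ];
  for (const [path, handler] of adminRoutes) app.get(path, handler);
  return adminRoutes.map(([path]) => path);
}
```

The `enabled` flag would come from configuration or an environment variable on the private instance.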
## Workspace: ws-server
The implementation is very straightforward: a websocket server with a pub/sub pattern that sends events related to the connected wallets.
The websocket api will be maintained so we keep compatibility with current client implementations.
The server will respond to only 2 types of messages from the client:
- `{ "action": "join", "id": "<wallet-id>" }` - Subscribe to all events related to the wallet-id
- `{ "action": "ping" }` - Respond with pong message
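Handling these two messages on the server can be sketched as a small parser (the message shapes are the ones listed above; the function name and return type are hypothetical):

```typescript
// Sketch: classify an incoming client message as join, ping, or invalid.
export type ClientAction =
  | { kind: "join"; walletId: string }
  | { kind: "pong" }
  | { kind: "invalid" };

export function handleClientMessage(raw: string): ClientAction {
  let msg: { action?: string; id?: string };
  try {
    msg = JSON.parse(raw);
  } catch {
    return { kind: "invalid" }; // not valid JSON
  }
  if (msg.action === "join" && typeof msg.id === "string") {
    return { kind: "join", walletId: msg.id }; // subscribe socket to wallet events
  }
  if (msg.action === "ping") {
    return { kind: "pong" }; // caller replies with a pong message
  }
  return { kind: "invalid" };
}
```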
## Workspace: queue-worker
We will create a simple worker that connects to a message queue, either with amqplib or the aws-sdk (for SQS), and processes messages as needed.
Currently we require 2 queue workers:
- Wallet creator
- Tx event processor
We can start a single instance with both workers, or add a command-line argument to start a specific one; this way we can scale each worker separately in production.
The implementation of the workers will be an adaptation of the current handlers for the same tasks.
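The command-line selection could be sketched like this (the worker names and argv convention are assumptions):

```typescript
// Sketch of the entrypoint's worker selection: with no argument every worker
// starts in one process; with an argument only that worker starts.
const ALL_WORKERS = ["wallet-creator", "tx-processor"] as const;
export type WorkerName = (typeof ALL_WORKERS)[number];

export function selectWorkers(arg?: string): WorkerName[] {
  if (!arg || arg === "all") return [...ALL_WORKERS];
  if (!(ALL_WORKERS as readonly string[]).includes(arg)) {
    throw new Error(`unknown worker: ${arg}`);
  }
  return [arg as WorkerName];
}

// At startup: const workers = selectWorkers(process.argv[2]);
```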
# Drawbacks
- Will require changing the api part of the codebase
- The idle cost of the service will be higher
- Requires us to manage more infrastructure
# Rationale and alternatives
- More control of the environment and no vendor lock-in
  - A docker service can be deployed on any infrastructure
- Issues are more easily reproduced, so easier to investigate
- Deploys will be less complicated
- More development tools to investigate issues
- More production tools to check that the service is working as intended
  - e.g. Prometheus, sentry
  - Makes development and bug fixes proactive instead of reactive
- Running a local environment will make development easier
  - This will also make it easier to bring more devs to contribute to this project
  - The project may be more stable in the future due to this change
- We will still be able to deploy the service as a lambda using an express-to-lambda adapter entrypoint (e.g. serverless-http)
# Task breakdown
- Convert wallet-service implementation (4dd)
- Convert wallet-service tests (2dd)
- Implement websocket server (2dd)
- Implement queue workers (2dd)
- Orchestrate and run minimal infrastructure in dev environment (1dd)
- Plan production infrastructure (3dd)