StreamX Commerce Accelerator is a project designed to streamline the setup of new e-commerce projects, enabling the rapid creation and deployment of commerce websites. The project structure is modular and organized into specific directories and files to facilitate development, deployment, and maintenance.
This documentation outlines the components of the project, the purpose of each folder, and instructions for local setup and deployment.
In the case of Windows, all the commands outlined in this file should be run inside the Linux VM terminal, which hosts Docker, rather than in native Windows terminals (e.g., CMD or PowerShell). For instance, if Docker is used with WSL, the commands should be executed inside the WSL terminal.
To launch WSL: Press Win + R, type wsl, and hit Enter.
WSL commands run in the Linux context. Therefore, you must check out (git clone) this repository via the WSL terminal to prevent Git from automatically converting newlines in bash scripts (\n) to Windows-style newlines (\r\n), which could cause unexpected errors while parsing and running the scripts.
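For example, a minimal sketch of cloning from inside WSL (the repository URL and folder name are placeholders) while keeping LF line endings:

```bash
# Run inside the WSL terminal, not CMD or PowerShell.
# "input" keeps LF endings on checkout, so bash scripts are not rewritten to CRLF.
git config --global core.autocrlf input
git clone <repository-url>
cd <repository-folder>
```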
If you are running the commands below on Windows, replace `image: bitnami/etcd` with `image: bitnamilegacy/etcd` in gateway/local/docker-compose.yml.
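For instance, a one-line substitution you could run from the repository root (a convenience sketch; editing the file manually works just as well):

```bash
# Swap the etcd image to the legacy Bitnami repository for Windows setups
sed -i 's|image: bitnami/etcd|image: bitnamilegacy/etcd|' gateway/local/docker-compose.yml
```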
Install Homebrew and its prerequisites:

```bash
sudo apt update
sudo apt install -y build-essential curl file git
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Add brew to your shell:

```bash
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
```

Restart your WSL shell or run:

```bash
source ~/.bashrc
```

Verify:

```bash
brew --version
brew update
```

Install and link OpenJDK 17:

```bash
brew install openjdk@17
brew link --force --overwrite openjdk@17
```

If heavy automation scripts (such as publish-all.sh) become unresponsive or "hang",
it typically indicates that Docker has reached the default resource ceiling allocated by Windows.
You can adjust these settings by creating or editing a global configuration file on your Windows host.
To do so, navigate to your Windows user profile folder by entering %USERPROFILE% in the File Explorer address bar.
Open or create a file named .wslconfig.
Add or update the following section:
```ini
[wsl2]
memory=80%
swap=4GB
```
For the changes to take effect, stop WSL using the `wsl --shutdown` command and start it again.
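After WSL comes back up, you can confirm the limits it actually picked up from inside the WSL shell:

```bash
# Show total memory and swap visible to the WSL VM
free -h
```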
As a prerequisite, ensure that you have the StreamX CLI installed in the latest preview version:

```bash
brew upgrade streamx-dev/preview-tap/streamx
brew install streamx-dev/preview-tap/streamx
```

Running the APISIX gateway also requires the `yq`, `jq`, and `envsubst` commands. To install these, run:

```bash
brew install yq
brew install jq
brew install envsubst
```
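A quick sanity check that all required tools are on the PATH before continuing:

```bash
# Prints the resolved path for each tool; a missing name means it is not installed
command -v streamx yq jq envsubst
```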
- Start StreamX

Run the StreamX instance (the current setup requires the preview version of the StreamX CLI, see prerequisites):

```bash
streamx run -f ./mesh/mesh.yaml
```

Note: For local development you can also use mesh-light.yaml, which comes with basic functionality only and allows running on limited resources.

Alternatively, instead of `streamx run` you can use the `streamx dev` command. This allows you to view the mesh in graphic mode within the local instance of the StreamX Dashboard.
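For example (the mesh-light.yaml path and the `-f` flag for `streamx dev` are assumptions based on the `streamx run` usage above):

```bash
# Lighter local setup (assumed path; adjust if it differs in your checkout)
streamx run -f ./mesh/mesh-light.yaml
# Dev mode with the StreamX Dashboard view (assumes the same -f flag as streamx run)
streamx dev -f ./mesh/mesh.yaml
```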
- Run the Proxy

Start the local proxy for serving the website:

```bash
./gateway/local/run-apisix.sh
```

Note: You can run the proxy with the APISIX Dashboard for easy routes administration by adding the `dashboard-enabled=true` param to the above script, as sketched below.

Note: The started proxy server connects to the network created by the `streamx run` command. If the running mesh is restarted, the proxy must be restarted too. Otherwise, `502. Message: Bad Gateway` will occur when calling endpoints exposed by the proxy.
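A sketch of enabling the dashboard; it assumes the param is passed as a plain argument to the script, so check the script's usage if it differs:

```bash
# Start the proxy with the APISIX Dashboard enabled (argument form is an assumption)
./gateway/local/run-apisix.sh dashboard-enabled=true
```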
- Publish All Resources

Use the `publish-all` script to deploy all necessary data to StreamX:

```bash
./scripts/publish-all.sh
```
Follow the steps described in the README.
- Optionally: Append and configure custom hosts in application-cloud.properties:
echo "streamx.accelerator.ingestion.host= streamx.accelerator.web.host=" >> config/application-cloud.properties
Properties:
- `streamx.accelerator.ingestion.host` - Custom StreamX REST Ingestion API host. Default value is `ingestion.${streamx.accelerator.ip}.nip.io`.
- `streamx.accelerator.web.host` - Custom StreamX WEB Delivery Service host. Default value is `web.${streamx.accelerator.ip}.nip.io`.
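For illustration only, appending hypothetical hostnames (replace them with your own):

```bash
# Hypothetical host values for illustration; substitute your real DNS names
echo "streamx.accelerator.ingestion.host=ingestion.commerce.example.com
streamx.accelerator.web.host=web.commerce.example.com" >> config/application-cloud.properties
```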
- Configure StreamX properties in the `.env` file:

```bash
echo "#%cloud.streamx.accelerator.ip=
%cms.streamx.ingestion.auth-token=
%pim.streamx.ingestion.auth-token=" > .env
```
Properties:
- `%cloud.streamx.accelerator.ip` - Kubernetes cluster Load Balancer IP. Uncomment and set this property value if `streamx.accelerator.ingestion.host` or `streamx.accelerator.web.host` contains the `${streamx.accelerator.ip}` placeholder.
- `%cms.streamx.ingestion.auth-token` - CMS source authentication token. The value should be taken from the Kubernetes cluster `sx-ing-auth-jwt-cms` secret.
- `%pim.streamx.ingestion.auth-token` - PIM source authentication token. The value should be taken from the Kubernetes cluster `sx-ing-auth-jwt-pim` secret.
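A sketch of reading one of these tokens with kubectl; the secret's data key name (`token` here) is an assumption, so list the secret's keys first if it differs in your cluster:

```bash
# List the secret's data keys, then decode the assumed "token" key for the CMS source
kubectl get secret sx-ing-auth-jwt-cms -o jsonpath='{.data}'
kubectl get secret sx-ing-auth-jwt-cms -o jsonpath='{.data.token}' | base64 -d
```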
- Deploy the Accelerator StreamX Mesh:

```bash
export KUBECONFIG=<path_to_kubeconfig> && export QUARKUS_PROFILE=cloud && streamx --accept-license deploy -f mesh/mesh.yaml
```
Note: Replace `<path_to_kubeconfig>` with the path to the kubeconfig file stored on your local file system.
- Publish all resources:
```bash
export QUARKUS_PROFILE=cloud,cms && streamx batch publish data
export QUARKUS_PROFILE=cloud,pim && streamx stream data data/catalog/products.stream
export QUARKUS_PROFILE=cloud,pim && streamx stream data data/catalog/categories.stream
```
Contains all the sample data that needs to be ingested for the website to be functional. It stores:
- Layouts
- Dynamic templates
- Static pages
- Page fragments
- CSS, JS files
- Product and category data
This is the place where we simulate source systems such as CMS, PIM, etc.
Contains the configuration and script for running the proxy that handles the traffic between system components.
Definition of the StreamX Mesh and all its associated configurations and secrets.
Collection of scripts responsible for:
- Ingesting data
- Setting up environments
Contains the data model definition, which serves as the contract between the website and the source systems.
Everything required to create the infrastructure in the cloud.
As part of the StreamX Accelerator, we deliver an option to set up initial search via the Algolia JS plugin called autocomplete-js.
https://www.algolia.com/doc/ui-libraries/autocomplete/api-reference/autocomplete-js/autocomplete/
It's a simple and isolated plugin with minimal setup. In order for it to work we need:
- An HTML element with an id of "autocomplete" (make sure to place it within the desired area, e.g. the navigation bar):

```html
<div id="autocomplete"></div>
```

- Import the required JS and CSS. For that we have two options: either reuse the minified versions from our repo:

```html
<link rel="stylesheet" href="../web-resources/css/algolia.theme.min.css" />
<script src="../web-resources/js/autocomplete-js.js"></script>
```

or directly from the CDN:

```html
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@algolia/[email protected]/dist/theme.min.css" integrity="sha256-7xmjOBJDAoCNWP1SMykTUwfikKl5pHkl2apKOyXLqYM=" crossorigin="anonymous"/>
<script src="https://cdn.jsdelivr.net/npm/@algolia/[email protected]/dist/umd/index.production.js" integrity="sha256-Aav0vWau7GAZPPaOM/j8Jm5ySx1f4BCIlUFIPyTRkUM=" crossorigin="anonymous"></script>
```

- Attach our custom autocomplete-init.js to initialize it:

```html
<script defer src="../web-resources/js/autocomplete-init.js"></script>
```

Our custom autocomplete-init.js contains all of the required JS for the plugin to work. Alternatively, you can follow the instructions from the setup section of autocomplete-js for a higher level of control over it:
https://www.algolia.com/doc/ui-libraries/autocomplete/api-reference/autocomplete-js/autocomplete/