# 🛡️ Sentinel Kit: The Simplified Platform for Incident Response (SOC & DFIR)
$\color{red}{\textsf{\textbf{WARNING}: This project is currently in an early stage of development. Not all components have been ported to this repository, and the features are not yet stable enough for production use.}}$
The features already available online are documented [here](docs/index.md).
---
**Sentinel Kit** is a comprehensive Docker stack designed to provide **Digital Forensics and Incident Response (DFIR)** and **Security Operations Center (SOC)** capabilities with unparalleled deployment simplicity.
Ideal for **situational monitoring** or **small-scale security incident response**, this integrated platform enables collection, analysis, detection, and immediate response to threats.
---
* `config/sigma_ruleset`: Sigma rules applied to logs ingested into Elasticsearch
* `config/yara_ruleset`: YARA rules applied to the `data/yara_triage_data` folder or used by the *sentinel-kit_datamonitor* agent
## 📖 Data
Persistent data are located in the `data/` folder:
* `data/caddy_logs`: Stores the Caddy server access & error logs.
* `data/fluentbit_db`: Fluent Bit ingest database (to avoid indexing the same data several times).
* `data/ftp_data`: Stores files uploaded to the SFTP server.
* `data/grafana`: Persists your Grafana profile if you want to make your own dashboards and customizations.
* `data/kibana`: Kibana user customizations (dashboards, config...).
* `data/log_ingest_data`: Used to forward logs if you don't want to use the Fluent Bit HTTP forwarder.
* `data/mysql_data`: Contains the persisted web backend database.
* `data/yara_triage_data`: Any file placed in this folder is automatically scanned.
---
To stop and remove the containers, networks, and volumes created by Docker Compose:

```bash
docker-compose down -v
```
If you want to erase all user data and start from a fresh, clean installation, a `clean-user-data` script (sh or PowerShell, depending on your OS) is provided to help you erase all personal data. Then, you can rebuild the whole stack with:
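For example, a typical Docker Compose rebuild (the exact command is assumed here; adjust it to your workflow):

```bash
# Rebuild the images and start the stack again (assumed standard Docker Compose usage)
docker-compose up -d --build
```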
Although the code in this stack is designed to be secure in terms of authentication and client/server exchanges (strong identification, certificates, JWT, etc.), it is your responsibility to limit the server's exposure (flow filtering, whitelisting, etc.). No filtering mechanism is provided in the project, so configure your flows to limit exposure as follows:
**Exposed services:**
| Service | Purpose |
|---------|---------|
| **Web API** | Used for client<->server communications and admin actions over the web interface |
| **SFTP Server** | Secure file/evidence upload |

Optional, for interoperability with a log forwarder such as Logstash, Winlogbeat, or Fluent Bit (this can also be done over the Web API, see documentation):

| Service | Purpose |
|---------|---------|
| **Syslog forwarder** | Fluent Bit log forwarder to Elasticsearch |

**Admin access:**

| Service | Purpose |
|---------|---------|
| **Web front end** | Access to the admin application (agents control, log analysis...) |
| **Kibana** | Full Kibana access (for advanced usage only; standard features are implemented directly in the admin web interface) |
## Prerequisites
This project is designed to be deployed in minutes using Docker Compose.
* **Docker**
* **Docker Compose** (or Docker Engine including Compose)
* Minimum **8 GB of RAM** (essential for Elasticsearch)
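To quickly check these prerequisites (the commands below are generic Docker checks, not specific to this stack; either the Compose plugin or the standalone `docker-compose` binary works):

```bash
docker --version
docker compose version   # or: docker-compose --version
```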
## 🚀 Let's start Sentinel-Kit
Everything comes as a full docker-compose stack to avoid manual configuration and dependency management and to simplify deployment.

First, clone the Sentinel Kit repository:
```bash
git clone
cd sentinel-kit
```
Then, in a local environment, you need to define the following DNS entries:
```bash
# OS `hosts` file
127.0.0.1 sentinel-kit.local
127.0.0.1 backend.sentinel-kit.local
127.0.0.1 phpmyadmin.sentinel-kit.local
127.0.0.1 kibana.sentinel-kit.local
127.0.0.1 grafana.sentinel-kit.local
```
For advanced configuration, you can edit `config/caddy_server/Caddyfile` if you want to use your own nameserver. In that case, you will also need to set your hostnames, replacing the original ones in the `.env` file. You can find details about advanced configuration [here](02-customize-stack.md).
When your DNS configuration is ready, start the stack with:
```bash
docker-compose up -d
```
The initial startup can take a while, as Elasticsearch configures two nodes plus Kibana.
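To follow the startup progress, the usual Docker Compose commands apply:

```bash
# Show container states; wait until all services are up and healthy
docker-compose ps

# Follow all service logs while Elasticsearch and Kibana initialize
docker-compose logs -f
```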
The Caddy server generates a complete certificate chain for the exposed services (frontend, backend, phpMyAdmin, Kibana, Grafana). If you don't want to accept untrusted certificate chains every time, add the root and intermediate CAs to your browser's certificate store. They are located in `./config/certificates/caddy_server`.
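If you want to inspect a CA before importing it, something like the following works (the certificate file name below is a placeholder; check the folder for the actual names):

```bash
# Print the subject and validity period of the root CA (file name is hypothetical)
openssl x509 -in ./config/certificates/caddy_server/root.crt -noout -subject -dates
```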
[https://sentinel-kit.local](https://sentinel-kit.local) should then return the Sentinel Kit web interface.
Most of the parameters for this stack are defined in the **`.env`** file located in the root directory. These settings are loaded **before the stack starts**. If the stack is already running, you must **restart it** for any changes to take effect.
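One straightforward way to apply `.env` changes is a full restart of the stack:

```bash
# Recreate the containers so the new .env values are picked up
docker-compose down
docker-compose up -d
```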
---
## ⚙️ Services and Profiles
The following configuration block defines the active services using Docker Compose profiles and sets the Elasticsearch cluster mode.
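As an illustration, the relevant part of `.env` can look roughly like this (profile names are taken from the list below; the exact default values, in particular the cluster mode, are assumptions):

```bash
# Services enabled through Docker Compose profiles (comma-separated)
COMPOSE_PROFILES=internal-monitoring,phpmyadmin,sftp,es-secondary-node

# Elasticsearch cluster mode (value shown here is an assumption)
ELASTICSEARCH_CLUSTER_MODE=multi-node
```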
You can remove certain services if you don't need them. Simply remove the corresponding profile from the `COMPOSE_PROFILES` list:
* Remove `internal-monitoring` to disable access to Grafana features.
* Remove `phpmyadmin` to disable direct access to the MySQL database.
* Remove `sftp` if no external file upload functionality is required.
## Elasticsearch Configuration
By default, Elasticsearch is configured as a two-node cluster (multi-node). For environments with limited resources, you can switch to a single-node setup by removing the `es-secondary-node` profile and updating `ELASTICSEARCH_CLUSTER_MODE` as shown below:
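A sketch of that single-node change in `.env` (the exact value expected by `ELASTICSEARCH_CLUSTER_MODE` is an assumption):

```bash
# es-secondary-node removed from the active profiles
COMPOSE_PROFILES=internal-monitoring,phpmyadmin,sftp

# Run Elasticsearch as a single node (value assumed)
ELASTICSEARCH_CLUSTER_MODE=single-node
```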
**__Note:__** Switching between single-node and multi-node can be done at any point during the stack's lifecycle without requiring a complete reinstallation or data loss.
### Elasticsearch Memory Limit
To limit the memory allocated to the Elasticsearch cluster, modify the following variable (default is 4GB):
```bash
# Value in bytes (4294967296 bytes = 4 GiB)
ELASTICSEARCH_MEMORY_LIMIT=4294967296
```
## 🌐 Domain Names
The hostnames for the exposed services are customizable here. It is mandatory to map these hostnames to the stack's IP address either in your DNS configuration or in your local hosts file (for isolated local installations).
| Service | Environment Variable | Default Hostname |
|---------|----------------------|------------------|
For production usage, you can change any of the default credentials below.
⚠️ **__Important:__** Once the stack is initialized, changing certain secrets (like database or Elasticsearch credentials) can destabilize services. It is strongly recommended to modify these values only before the initial launch of the stack.
To stop the stack without removing existing data, run the following command:
```bash
docker compose down
```
To stop __and remove the containers, networks, and volumes__ created by Docker Compose:
```bash
docker-compose down -v
```
If you want to erase all user data and start from a fresh, clean installation, a `clean-user-data` script (sh or PowerShell, depending on your OS) is provided to help you erase all personal data. Then, you can rebuild the whole stack with:
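A plausible end-to-end sequence, assuming the clean script sits at the repository root (adjust the script name and extension to your OS):

```bash
# Stop the stack and drop its volumes
docker-compose down -v

# Erase all personal data (script path and extension are assumptions)
./clean-user-data.sh

# Rebuild and restart the whole stack
docker-compose up -d --build
```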
Once installed and operational, the Sentinel Kit administration interface is accessible by default at https://sentinel-kit.local. No web self-registration is implemented, which separates the role of the platform administrator from that of the analyst who operates the platform.

The OTP is directly available if the admin wants to store it. You don't need to communicate it to the end user; they will retrieve it on their first successful login to the platform.