All test and analysis code for our paper can be found in the `_hp` directory.
Our modified version of the wptserve HTTP server implementation can be found in `tools/serve` and `tools/wptserve`. All other directories are untouched but required for `wptserve` to run; we removed the other WPT test directories for clarity.
## Setup
- Create a fresh Ubuntu 22.04 container/VM: `lxc launch ubuntu:22.04 <name>` and connect to it: `lxc exec <name> bash`
- Switch to the ubuntu user: `su - ubuntu`
- Clone this repository: `git clone [email protected]:header-testing/header-testing.git`
- Run the setup file: `cd header-testing/_hp`, then `./setup.bash` (reopen all terminals or run `source ~/.bashrc` afterwards)
- Configure the DB settings in [config.json](_hp/config.json)
- Set up the database: `cd _hp/tools && poetry run python models.py`
- Set up certs: either remove the `.demo` suffix from the files in `_hp/tools/certs/` to use self-signed certs, or add your own certs there
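The DB settings expected in `config.json` are not documented here. A purely hypothetical illustration of what such a file might contain (all key names are assumptions — consult the shipped `_hp/config.json` for the real schema):

```json
{
  "db_host": "localhost",
  "db_port": 5432,
  "db_user": "ubuntu",
  "db_password": "change-me",
  "db_name": "header_testing"
}
```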
## Run Instructions
- Always start the WPT server first (from the top-most folder): `poetry run -C _hp python wpt serve --config _hp/wpt-config.json`
- Create the basic and parsing responses: run `cd _hp/tools && poetry run create_responses.py` for the basic responses, then `cd analysis` and execute `response_header_generation.ipynb` to generate the parsing responses.
- Manually check that the server and the tests are working: visit http://sub.headers.websec.saarland:80/_hp/tests/framing.sub.html
- Automatic test runners (`cd _hp/tools/crawler` first):
  - Android: `poetry run android_intent.py` (additional configuration required)
  - macOS/Ubuntu: `poetry run desktop_selenium.py`
  - iPadOS/iOS: `poetry run desktop_selenium.py --gen_page_runner --page_runner_json urls.json --max_urls_until_restart 10000`, then visit the URLs in that file manually
- Analysis: open `_hp/tools/analysis/main_analysis_desktop_basic+parsing.ipynb` (also contains the mobile analysis)
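The iPadOS/iOS runner only generates `urls.json`; the URLs must then be opened by hand. A small helper to list them can be sketched like this (that the file is a flat JSON array of URL strings is an assumption about the page-runner output format — adjust to whatever `--gen_page_runner` actually emits):

```python
import json

def load_page_runner_urls(path="urls.json"):
    """Load the URL list generated by --gen_page_runner.

    Assumes the file is a flat JSON array of URL strings; adjust
    if the actual page-runner format differs.
    """
    with open(path) as f:
        data = json.load(f)
    # Keep only entries that look like URLs.
    return [u for u in data if u.startswith("http")]

if __name__ == "__main__":
    for url in load_page_runner_urls():
        print(url)
```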
## Inventory
- `_hp`: All test and analysis code for the paper
- `tools`: Contains the modified `wptserve`; other directories are used by `wptserve` internally but are not modified
Note for the analysis: when explaining reasons for differences, keep in mind that other features such as blocked mixed content and CORB might be responsible, rather than different parsing of the security header.
Inventory of `_hp`:

- `wpt-config.json`: Ports, domains, certs, ... (subdomains are currently hardcoded in `tools/serve/serve.py`)
- `common/`: Shared non-JS files for the tests (images, HTML, ...)
- `resources/`: Shared JavaScript files for the tests (testharness, save_results, ...)
- `server/`:
  - `responses.py`: Serves the correct responses from the DB (`responses.py?resp_id=<int>&feature_group=<str>`)
  - `store_results.py`: Stores the test results in the DB (expects JSON with `{tests: [...], browser: <browser_id>}`)
- `tests/`:
  - One file for each feature group to test
  - Create one test case for everything one wants to test, then run it for all corresponding responses and relevant origin configurations
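Given the two server endpoints above, the request shapes can be sketched in Python (the base URL path prefix and the exact payload field names are assumptions inferred from the endpoint descriptions, not confirmed by the repository):

```python
import json
from urllib.parse import urlencode

# Host and port taken from the manual server check in the run
# instructions; the /_hp/server path prefix is an assumption.
BASE = "http://sub.headers.websec.saarland:80/_hp/server"

def response_url(resp_id: int, feature_group: str) -> str:
    """Build the query URL from which responses.py serves a stored response."""
    query = urlencode({"resp_id": resp_id, "feature_group": feature_group})
    return f"{BASE}/responses.py?{query}"

def results_payload(tests: list, browser_id: int) -> str:
    """Serialize test results in the shape store_results.py expects
    (field names assumed from the description above)."""
    return json.dumps({"tests": tests, "browser": browser_id})
```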