
Testing

esthermakaay edited this page Aug 31, 2023 · 22 revisions

This page contains public information on testing in EWC. Please do not edit without consulting and informing Task 4.5.


Introduction

This information is provided at the beginning of testing in EWC. We will adapt and extend it as we go along. This means that not all areas are covered yet, partly because we haven’t had the time, but also because we want to test the early set-up and stay flexible about changing it as we go along.

In EWC there will be a lot of testing and piloting. We will be testing technology, compliance with standards, interoperability, functionality and safety. This will be done across the ecosystem – against platforms and validators/simulators, but mainly against each other.

We will actively test most of the roles in the ARF ecosystem.

In the call for the LSPs and the proposal, it is mentioned that services must eventually be deployed in pre-production. All services will start as test-services and then gradually move on to pre-production level. At this moment we only use a test/POC-setup with test-data.

All operational participants will be involved in executing testing. The organisation for this is part of several different Tasks and Work Packages (see information on this on EWC NextCloud).

We roughly have 4 types of testing in EWC, each with a different purpose:

  1. Connectivity testing – Can we reach each other? This test verifies that every service is reachable and is the first step in “activating” a service. It will run continuously throughout the project and involve all operational participants.
  2. Interoperability testing – Verify that our services work together with the wallet. This tests the implementation of the protocols and standards in the various services when executing the scenarios. Next to basic functionality, testing will be based on the business scenarios from WP2 and WP3 and cover all requirements from the ARF (Type I) and additional requirements from the EWC scenarios (Type II).
  3. End-user piloting – Do end-users understand what we built and how to handle our services? We will test various scenarios and transactions in 3 planned phases, based on maturity.
  4. Security and privacy assessments – We need to ensure quality, compliance and protection of our services.

To facilitate the start of testing, we will begin with a few simple overviews of services involved. These will be elaborated and/or replaced by other tools later on in the pilot (e.g. trust anchors, trust lists, key management).

EWC Service Catalogue

The EWC Service Catalogue (on EWC NextCloud) provides an overview of the operational services that are available for testing in EWC and respond on the ping endpoint (connectivity testing). There are separate sheets for the various operational roles.

RPs / Verifiers – all participants in the role of Relying Party (operating a service for verifying attestations).
Issuers – all participants who have a service that issues attestations (credentials, attributes) to the wallets. Note that virtually all issuing-services will use some form of authentication and therefore should support the basic verification services.
Signing – all participants who will provide a signing service (either to/from the wallet or initiated by an RP). We don’t expect the signing services to be operational early on, but signing services will use some form of authentication and therefore should support the basic verification services.
Wallets – all participants who provide an operational wallet.

Details to be provided in the Service Catalogue for RPs-Verifiers/Issuers/Signing:

  • Service endpoint URL: the URL of the service endpoint. This service must respond to a public ping endpoint according to the specifications in connectivity testing.
  • Short service description: a brief description or the name of the service, avoiding version numbers and such (those should be on the index-page of the ping-endpoint-response, or the information should be linked to from there).
  • Organisation name: the name of the (participant) organisation operating this service.
  • Issues Contact GitHub Account: the GitHub account of the person handling (or triaging) the issues for your service.
  • Capabilities: the capabilities (standards/protocols) supported by this service, with the date from which each is supported.

Additional details to be provided in the Service Catalogue for Wallets:

  • Download URL or App-name and App-store: the location where a (test-)wallet instance can be retrieved.
  • Endpoint URL (if applicable): the URL of a (test-)wallet endpoint. This service must respond to a public ping endpoint according to the specifications in connectivity testing.
  • Short description: a brief description on the (test-)wallet at the given location/endpoint.

EWC Test Catalogue

The EWC Test Catalogue (on EWC NextCloud) provides an overview of all the operational interop-test-scenarios and the services that are (or can be) involved in testing them. There are different overviews per sheet:

Test Scenarios – All test scenarios with a short description, a link to the test scenario document, the status (Active, Closed, Design) and the linked test cases.
Test Cases – All test cases, their status (Active, Closed, Design), a link to the test case template and the roles involved in the test case.
Test Services – All services involved in a test scenario (and in which roles). Services can be listed multiple times (a separate line for each test scenario they engage in).

A test scenario is a document that describes the interoperability test scenario and provides information on the rationale (what is being tested and why). It contains information on who is eligible for testing (which types of participants / which roles) and lists the test cases involved in the scenario.

A test case is a template describing the test, instructions, test-steps and pass/fail criteria. It lists all the services/participants engaged in the test case. This template is also used for reporting back on executed tests.

Reporting And Handling Issues

EWC uses a GitHub Project for Testing. This is a private repository; if you’re added to the EWC-consortium on GitHub, you can access it. Every operational participant must have 1 or more persons to manage issues, and these persons must be added to the EWC-consortium on GitHub. This will be done when you onboard your service for testing and it is added to the EWC Service Catalogue.

  • Persons from your organisation who will be handling test-issues reported by others on your services. (List 1 person per service in the EWC Service Catalogue for this. If more are involved, that person can re-assign the issues to their colleagues.)
  • Persons from your organisation who will be reporting test-issues found when testing against other services and handle any responses on these reported issues.

Before reporting any issues on interoperability tests, you must run a connectivity test to ensure the cause is not in the connectivity-domain.

Issues in GitHub can be placed in various swim-lanes (descriptions present in GitHub):

  • Open issues: new issues that are reported, but not yet being worked on.
  • In progress: issues that have been picked up by the assignee and are being worked on.
  • Waiting for confirmation: issues that have been resolved and re-assigned to the original reporter (or another relevant party) for confirmation of the fix.
  • Closed: once the resolution has been confirmed, the issue is moved here.

There are various labels available to label the issues. These can be changed according to our needs.

The GitHub project provides a number of views that can be changed upon request:

  • Board-view – swimlanes for status of issue
  • List-view – spreadsheet-type view of all issues
  • Sorted by labels – showing the number of issues per label and allowing filtering per label
  • Sorted by Assignees – showing the number of issues per assignee and allowing filtering per assignee

Connectivity Testing

Can we reach each other? The main goal of connectivity testing is to verify that every service is reachable. All operational participants will need to support and run this test.

Every service will need to respond to a simple API-call. This is the first step in “activating” a service. It will run continuously throughout the project.

Lots of issues arise from (changes in) network set-up. Firewalls, load balancers and routers can cause errors in transactions, making some service calls simply not go through (or only half of the time, or completely skewed). This goes for both the internal (organisational) network and external networks, such as the internet.

Issues with reachability will impair (and often obfuscate) all other testing. They’re not always recognised properly, causing a lot of time and energy to be wasted on troubleshooting the wrong kind of issues. This is why we will test reachability first and foremost throughout our project.

Reachability depends on location (logical and geographical). Being reachable by one or more participants does not mean your service is reachable by all participants. That’s why we will test this with the full EWC ecosystem. All public interfaces used in our consortium will be involved. This means every public interface providing a service to end-users with EUDI Wallets: Issuers (PID, ODI, EAA, QEAA, signing), Verifiers. But also every public interface providing a service to other participants/actors within EWC:

  • Anyone offering validation/verification services, revocation services, et cetera
  • Trusted list providers, Authentic sources (t.b.d.), Wallet providers
  • This excludes interfaces for “outsourced” services that are used by intermediaries acting on behalf of a participant/actor

Every service will need to respond to a simple API-call (an HTTP request should give a 200/OK response on a /.ping page containing basic information on the service). Specifications are provided in the EWC ping-endpoint-response document.
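To illustrate the idea, a minimal /.ping endpoint could look like the sketch below, using only the Python standard library. Note that the actual response contents are specified in the EWC ping-endpoint-response document; the fields used here ("service", "organisation", "status") are illustrative placeholders, not the official format.

```python
# Minimal sketch of a public /.ping endpoint (standard library only).
# The response fields below are hypothetical examples; the real format
# is defined in the EWC ping-endpoint-response document.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PING_INFO = {
    "service": "Example verifier service",  # hypothetical description
    "organisation": "Example Org",          # hypothetical organisation
    "status": "ok",
}

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.ping":
            # Answer the connectivity check with 200/OK and basic info.
            body = json.dumps(PING_INFO).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve on port 8080:
#   HTTPServer(("0.0.0.0", 8080), PingHandler).serve_forever()
```

In practice the endpoint would sit behind your normal web server or API gateway; the point is simply that it must be publicly reachable and answer with 200/OK.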

This is the first step in “activating” a service within EWC, whatever that service will become, together with a registration in the EWC Service Catalogue.

These tests do not cover any of the standards or protocols for EUDIW-transactions. Basic connection is the only target. Everything else will be covered by interoperability- and end-user-testing.

Connectivity testing is an ongoing effort, requiring continuous testing and resolving, because changes in network set-up happen all the time (not only within organisations, but also in the broader scope of the internet). So something that is reachable one moment might not be the next. This is also why we urge participants to check connectivity first when encountering issues in other tests.

We ask participants to set up automated connectivity tests: create an automated job that sends out API calls and checks whether they receive a 200/OK response. We will provide a list of endpoints for this purpose. Task 4.5 will discuss how to handle the reports from these jobs (we might be able to facilitate an overview that shows the latest status).
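Such a job can be very simple. The sketch below assumes a list of service base URLs (EWC will provide the actual endpoint list) and checks each one's /.ping page for a 200/OK response; it could be scheduled via cron or a CI pipeline.

```python
# Sketch of an automated connectivity check. The endpoint list and /.ping
# convention follow the EWC connectivity-testing set-up; everything else
# (function names, timeout) is an illustrative assumption.
import urllib.request
import urllib.error

def check_endpoint(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if the service's /.ping page answers with HTTP 200."""
    url = base_url.rstrip("/") + "/.ping"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # DNS failure, refused connection, timeout, TLS error, ...
        return False

def run_checks(endpoints: list[str]) -> dict[str, bool]:
    """Check every endpoint and return a URL -> reachable mapping."""
    return {url: check_endpoint(url) for url in endpoints}
```

Failures reported by such a job can then be raised as GitHub issues and assigned to the contact for the unreachable service, as described under Reporting And Handling Issues.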

Errors in connectivity can be reported as an issue in GitHub and assigned to the contact for the unreachable service. Resolving these issues is often a joint effort, because the cause may lie in different parts of the network and the parties experiencing the issue are not always the ones capable of resolving it.

Interoperability Testing

The main goal of interoperability testing is to verify that our services work together with the wallets. Do the interfaces understand each other? Is the interpretation of the standards aligned?

Interoperability testing is based on the business scenarios from WP2 and WP3 and will cover the requirements from the ARF (Type I) and the additional requirements from the EWC scenarios (Type II).

We will start interoperability testing with basic scenarios focusing on verifying and issuing and then later on extend these to the specific scenarios (and the attributes and roles needed in there) from the use cases. The people from T1.3 (innovation management) and T4.1 (standards and specifications) will ensure that all ARF-required functionality will be part of the interoperability tests.

For all interoperability tests, there will be test-scenarios provided. Each test will provide information on who is eligible for testing (which types of participants) and have separate test cases with a template per type of participant (per role).

Interoperability tests are run on all services (all public interfaces) involved in the EWC scenarios. Not all participants will run all tests, because they might not (want to) support all scenarios.


All business scenarios from WP2 and WP3 will be refined to a functional level, showing the detailed flow of data between the various actors. These flows then will be mapped to interface-specifications that show the involved standards and specifications. (This will be done by “Scenario Refinement” and “Technical Design”, 2 working groups that will be set up by T1.3 in Q3 2023). This scenario- and technical refinement will be used to create test scenarios. When there are multiple candidates and not enough resources to work on all of them, priorities will be set together with the domain-leads from travel, payments and organisational identity.

Each test scenario will provide information on the rationale (what is being tested and why) and describe pass/fail criteria. Issues encountered while testing can be reported in GitHub.

Tests will be run based on self-assessment: participants use the test scenarios and report on their results. Since all tests will include multiple participants, lying about these results will not be very fruitful.

Most tests will be based on services that, once started, will run throughout the course of our project. This means that participants who start implementation a bit later on will still be able to catch up on the earlier interoperability tests.

Almost all test scenarios will be updated and extended to incorporate new (releases of) standards and requirements. This means that participants will run the same tests more than once (e.g. to test with a different protocol or a different set of attributes).

The specifications for our ecosystem are not as mature as we’d like. The standards involved are freshly released, but not yet put to the test of true large-scale deployments. The first MVP of the reference implementation wallet will be released later this year (2023). And everything is very much in flux: the ARF and the standards and specifications are being updated and extended while we work on the implementations. This will make interoperability testing a real challenge.

To create some clarity within EWC, there will be 3-monthly focus sessions where the scope for interoperability is set for the next period. This includes the scenarios we will start building and testing, and the standards (and the specific parts of those standards) that are involved.

This doesn’t mean that participants cannot test scenarios or standards other than those in the consortium focus. If you started a bit later with implementation, you might need to start with test scenarios from an earlier phase. And if you’re implementing faster than most others, of course you should try and find some other front-runners to test your implementation against. T4.5 will support you as much as we can, but the consortium focus is what sets our priority to work on.

End-User Piloting

Do end-users understand what we built and what we want them to do? End-user piloting is about functionality and UX testing. The technical part should already work (ensured by interoperability testing) and raise no issues.

We will pilot selected sets of scenarios and transactions in 3 planned phases:

  • Implementation basics – 500 friends and fans
  • Implementation extended – 500+ selected participants
  • Pre-production environment – more than 10,000 “random” users

This will be guided by a dedicated test-portal, providing help and instructions to the test-users.

Technical support will be provided from interoperability-testing. T4.5 will use the test-scenarios and test-cases to run end-to-end tests prior to the end-user piloting.

Gen will lead end-user piloting from WP2.

External / Cross-Domain Testing

Do our solutions work outside of EWC? Does our approach align with the other LSPs? Or are we working in a bubble?

We need to test against other LSPs on different levels, the most important one being the interpretation of standards and specifications. Most standards we’re using are new-born 1.0 versions (or not even that) and will be interpreted and handled in many different ways. Even without being aware of it, we will create our own local dialects.

This is why we should open up interoperability-testing towards other LSPs early on, to test basic functionalities and align on interpretation. This is a topic of conversation between the different LSPs, managed by the coordination team and supported by WP5 (who are liaising to external stakeholders).

Security and Privacy Assessments

In order to ensure quality, compliance and protection, as well as provide risk mitigation, we will set up various assessments.

The main areas that we will focus on are:

  • Security: Vulnerability Scans, Security Checks
  • Privacy: Privacy Impact Assessment (PIA)
  • Legal compliance: Conformity Assessments

Standards and frameworks for these will be provided by WP1:

  • Task 1.2: risk mitigation
  • Task 1.4: legal and ethics
