
Meeting minutes #13

@divyapathak24


Meeting minutes: 22nd Jan 2024
Current paper draft evaluation section:

  1. Attack instances may contain very few attack flows and may not have successfully tricked the FRR system, so instead of reporting instance-wise FPR/FNR we should report trace-wise misclassification rates. Different prefixes behave differently, so the number of attack flows required also differs, which is another reason not to focus on instance-wise misclassification rates. This story should come out clearly in the paper (a small illustration follows this list).
  2. Discuss the challenges in identifying the different normal categories (normal, congestion/packet losses, LFs) in the CAIDA datasets. How we derive thresholds from the LF instances has to come out clearly (a data-driven learning approach).
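
For reference, a minimal sketch of computing a trace-wise misclassification rate from per-flow predictions. The aggregation rule (a trace counts as attack if any of its flows is attack/flagged) and the toy data are assumptions for illustration, not decisions from the meeting:

```python
# Minimal sketch: trace-wise misclassification rate from per-flow predictions.
# Assumption: a trace is treated as "attack" if any of its flows is attack/flagged.
from collections import defaultdict

# (trace_id, true_flow_label, predicted_flow_label); 0 = normal, 1 = attack
flow_results = [
    ("trace-A", 1, 1), ("trace-A", 1, 0),   # attack trace, partially detected
    ("trace-B", 0, 0), ("trace-B", 0, 1),   # normal trace with one false alarm
    ("trace-C", 0, 0), ("trace-C", 0, 0),   # clean normal trace
]

true_by_trace = defaultdict(int)
pred_by_trace = defaultdict(int)
for trace_id, y_true, y_pred in flow_results:
    true_by_trace[trace_id] |= y_true   # trace is attack if any flow is attack
    pred_by_trace[trace_id] |= y_pred   # trace is flagged if any flow is flagged

misclassified = sum(true_by_trace[t] != pred_by_trace[t] for t in true_by_trace)
print(f"trace-wise misclassification rate: {misclassified / len(true_by_trace):.2f}")
```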

Statistical methods of deriving thresholds:

  • Currently, we analyzed the data manually using the LF instances and derived thresholds. The next step is to learn these thresholds using chi-squared tests and to validate them against the observed distributions (see the sketch below).
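
A minimal sketch of one way such a threshold could be learned with a chi-squared test, assuming two sets of per-trace feature values (LF vs. attack) and a candidate-threshold sweep; the feature, the synthetic data, and the "maximise chi-squared" selection rule are illustrative assumptions, not the agreed method:

```python
# Minimal sketch: data-driven threshold learning with a chi-squared test.
# The synthetic data and the candidate-threshold sweep are assumptions only.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
lf_values = rng.normal(loc=2.0, scale=0.5, size=200)      # hypothetical LF traces
attack_values = rng.normal(loc=4.0, scale=0.8, size=200)  # hypothetical attack traces

def best_threshold(neg, pos, candidates):
    """Return the candidate threshold whose 2x2 contingency table
    (below/above threshold x LF/attack) maximises the chi-squared statistic."""
    best_t, best_stat = None, -1.0
    for t in candidates:
        table = np.array([
            [np.sum(neg <= t), np.sum(neg > t)],
            [np.sum(pos <= t), np.sum(pos > t)],
        ])
        if (table.sum(axis=0) == 0).any():
            continue  # degenerate split (everything on one side), skip
        stat, _, _, _ = chi2_contingency(table)
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

candidates = np.linspace(lf_values.min(), attack_values.max(), 100)
t, stat = best_threshold(lf_values, attack_values, candidates)
print(f"learned threshold = {t:.2f}, chi-squared = {stat:.1f}")
```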

P4-anomaly paper vs our work:

  • No instrumentation of the P4 code is required for our work

ML methods for k-best features and detection:

  1. Input: flows (instances) with a feature vector (FS, FD, RTT) fed into an ML feature-selection model
  2. Output: the k best features
  3. Train an ML model on the k best features using supervised, binary (0/1) labelled data, with normal as 0 and (successful) attack as 1. Deploy the trained model on the DP (see the sketch after this list).
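
A minimal sketch of steps 1-3 with scikit-learn on synthetic data; the selection criterion (mutual information), the value of k, and the tree model are illustrative assumptions, not decisions from the meeting (a tree-style model is shown only because that is the kind of model NetBeacon-like work maps into the data plane):

```python
# Minimal sketch: k-best feature selection followed by a supervised binary classifier.
# Feature values are synthetic; columns stand in for (FS, FD, RTT).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))       # one row per flow: (FS, FD, RTT)
y = rng.integers(0, 2, size=1000)    # 0 = normal, 1 = successful attack

# Steps 1-2: feature selection, keep the k best features.
selector = SelectKBest(score_func=mutual_info_classif, k=2)
X_k = selector.fit_transform(X, y)
print("selected feature indices:", selector.get_support(indices=True))

# Step 3: train a supervised binary model on the k best features.
# Compiling/mapping the trained model onto the DP is a separate step not shown here.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_k, y)
print("training accuracy:", clf.score(X_k, y))
```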

Papers to read:

  1. FlowLens: collect features in the DP, run the ML algorithm in the CP
  2. NetBeacon: collect features in the DP and run the ML algorithm in the DP (collects bins after every epoch)
  3. Our work: collect features in the DP and do detection in the DP as well (to discuss with Sankalp)
