Risk and Trust Management Analysis - CSC8216

Newcastle University | MSc Advanced Computer Science
Module: Risk and Trust Management (CSC8216)
Academic Year: 2025/2026


📋 Project Overview

This coursework presents a comprehensive cybersecurity risk assessment and privacy impact analysis for a university's online admission application system. The analysis covers risk identification, AI ethics, GDPR compliance, and data protection strategies for a complex multi-jurisdictional system processing sensitive applicant data.

System Under Analysis

Scenario: A university-wide online admission platform serving both local and international (EU) applicants with the following architecture:

  • Frontend: React-based web interface
  • Backend: Node.js with Express
  • Database: PostgreSQL for applicant data
  • Cloud Storage: AWS S3 for document management
  • Email Service: AWS SES for communications
  • Security: OAuth 2.0, MFA, WAF, SIEM monitoring

🎯 Project Objectives

This assessment demonstrates expertise in:

  • Risk Assessment: Identifying and evaluating cybersecurity threats
  • Risk Management: Creating structured risk registers with mitigation controls
  • AI Ethics: Analyzing algorithmic bias and transparency issues
  • Privacy Impact Assessment (PIA): GDPR-compliant privacy analysis
  • Legal Compliance: Understanding GDPR applicability and requirements
  • Data Protection: Comparing anonymization vs. pseudonymization techniques
  • Stakeholder Analysis: Identifying affected parties and impacts

📊 Analysis Framework

Question 1: Risk Identification & Risk Register

Objective: Identify potential cybersecurity risks and create a formal risk register

Key Risks Analyzed:

  1. Unauthorized Access

    • Threat: Hackers exploiting vulnerabilities to access sensitive data
    • Likelihood: Medium
    • Impact: High
    • Controls: OAuth 2.0 with MFA, Application Firewall
  2. Data Leakage

    • Threat: Confidential documents accessed or leaked unethically
    • Likelihood: Medium
    • Impact: High
    • Controls: Secure AWS bucket policies, encryption, strict access control
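The "secure AWS bucket policies" control above can be illustrated with a common S3 pattern: a policy that denies any request not made over TLS, so documents cannot be fetched in plaintext. A minimal sketch (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::admissions-documents",
        "arn:aws:s3:::admissions-documents/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
```

In practice this would be combined with blocking public access and default encryption on the bucket.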

Risk Categories Considered:

  • Hacking attempts and external threats
  • System vulnerabilities (web applications, cloud infrastructure)
  • User negligence (weak passwords, information sharing)

Mitigation Strategies:

  • Multi-factor authentication enforcement
  • End-to-end encryption
  • Web Application Firewall (WAF)
  • Regular security audits

Question 2: AI Ethics & Automated Decision-Making Risks

Objective: Assess ethical and cybersecurity risks of AI-based applicant screening

Context: The university is considering implementing an AI/ML system to automatically evaluate applicant profiles based on transcripts, test scores, and personal statements.

Risk 1: Algorithmic Bias

Analysis:

  • AI systems trained on historical admission data may perpetuate existing biases
  • Risks disadvantaging applicants from diverse socio-economic or cultural backgrounds
  • Can result in systemic discrimination against minorities or underrepresented groups
  • Research has documented bias in deployed AI hiring and admissions tools

Real-World Impact:

  • Reduced opportunities for diverse candidates
  • Legal and reputational risks for the institution
  • Ethical concerns about fairness and equity

Mitigation Strategies:

  1. Diverse Training Data: Ensure datasets represent all demographic groups
  2. Inclusive Design: Involve ethicists and community representatives in AI development
  3. Regular Audits: Continuous monitoring for discriminatory patterns
  4. Hybrid Decision-Making: Human review of all AI recommendations
  5. Bias Testing: Regular algorithmic fairness assessments
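The "Bias Testing" step above can be made concrete with a demographic-parity check: compare the rate of positive outcomes across applicant groups and flag large gaps for review. A minimal sketch in Node.js (field names `group` and `admitted` are illustrative):

```javascript
// Demographic parity: the share of applicants receiving an offer should be
// similar across demographic groups; a large gap flags the model for review.
function admissionRates(decisions) {
  const totals = {};
  for (const { group, admitted } of decisions) {
    totals[group] = totals[group] || { admitted: 0, total: 0 };
    totals[group].total += 1;
    if (admitted) totals[group].admitted += 1;
  }
  const rates = {};
  for (const [group, t] of Object.entries(totals)) {
    rates[group] = t.admitted / t.total;
  }
  return rates;
}

// Demographic-parity gap: largest minus smallest admission rate across groups.
function parityGap(decisions) {
  const rates = Object.values(admissionRates(decisions));
  return Math.max(...rates) - Math.min(...rates);
}
```

A gap above a chosen threshold would then trigger the human review described in the hybrid decision-making strategy.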

Risk 2: Lack of Transparency

Analysis:

  • AI systems operate as "black boxes" with opaque decision-making processes
  • Applicants and staff unable to understand how scores are generated
  • Raises accountability and trust issues
  • Violates principles of fairness and due process

Mitigation Strategies:

  1. Explainable AI (XAI): Implement interpretable models with clear feature importance
  2. Documentation: Clear explanation of how AI components affect scoring
  3. Human Oversight: Final decisions made by admission committee
  4. Transparency Policy: Public disclosure of AI usage in admissions
  5. Appeal Process: Mechanism for applicants to contest AI-influenced decisions
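One way to realise the "clear feature importance" point: if the scoring component is linear, each feature's contribution to the final score can be reported directly to reviewers and applicants. A hypothetical sketch (weights and feature names are illustrative, not the university's actual model):

```javascript
// For a linear score (sum of weight × feature), each term is an interpretable
// contribution that can be surfaced in an explanation.
// Weights and feature names below are purely illustrative.
const WEIGHTS = { gpa: 0.5, testScore: 0.3, statementRating: 0.2 };

function explainScore(features) {
  const contributions = {};
  let score = 0;
  for (const [name, weight] of Object.entries(WEIGHTS)) {
    const c = weight * (features[name] ?? 0);
    contributions[name] = c;
    score += c;
  }
  return { score, contributions };
}
```

For non-linear models, post-hoc attribution methods (e.g. SHAP-style values) play the same role, at the cost of added complexity.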

Question 3: Privacy Impact Assessment (PIA)

Objective: Conduct a structured PIA for the admission system processing sensitive data across multiple jurisdictions

Privacy Risk 1: Unauthorized Access to Sensitive Data

Risk Description:

  • External hackers or malicious insiders accessing personal identifiers, academic records, and recommendation letters
  • Data stored in cloud infrastructure (AWS S3) presents additional attack surface
  • Potential for data misuse, identity theft, or service disruption

Affected Stakeholders:

  • Applicants: Personal data compromised
  • University Staff: Reputational damage, legal liability
  • Faculty Members: Trust erosion
  • IT Administrators: Incident response burden

Mitigation Strategies:

  1. Role-Based Access Control (RBAC): Limit data access to authorized personnel only
  2. Multi-Factor Authentication (MFA): Already implemented, enforce rigorously
  3. SIEM Monitoring: Real-time detection and response to anomalies
  4. Regular Access Audits: Periodic review of who has access to what data
  5. Encryption: Data encrypted at rest (database, cloud storage) and in transit (TLS/SSL)
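The RBAC control above reduces to a role-to-permission lookup that denies anything not explicitly granted. A minimal sketch (roles and permission strings are hypothetical):

```javascript
// Map each role to the operations it may perform; anything not listed is denied.
// Roles and permission names are illustrative.
const ROLE_PERMISSIONS = {
  applicant: ['application:read-own', 'application:submit'],
  admissionsOfficer: ['application:read', 'application:decide'],
  itAdmin: ['audit:read'],
};

function canPerform(role, permission) {
  return (ROLE_PERMISSIONS[role] || []).includes(permission);
}
```

In an Express backend, a check like this would typically run in middleware before each route handler, with denials logged to the SIEM.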

Privacy Risk 2: Non-Compliance with Data Protection Regulations

Risk Description:

  • System processes EU applicant data, requiring GDPR compliance
  • Non-compliance can trigger fines of up to €20M or 4% of global annual turnover, whichever is higher
  • Legal penalties, reputational damage, and loss of trust

Affected Stakeholders:

  • Applicants: Rights violations (right to erasure, access, rectification)
  • University: Legal and financial penalties
  • IT Department: Compliance implementation responsibility

Mitigation Strategies:

  1. Regular PIAs: Ongoing privacy impact assessments
  2. Data Protection Audits: Ensure continuous GDPR compliance
  3. Data Minimization: Collect only necessary information
  4. Retention Policies: Delete data when no longer needed
  5. Staff Training: Educate team on GDPR requirements and data protection principles
  6. Data Protection Officer (DPO): Appoint dedicated compliance officer
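The retention-policy point can be sketched as a simple expiry check: records older than the retention period are flagged for deletion. (The retention period below is illustrative; GDPR does not fix a universal figure, only that data must not be kept longer than necessary.)

```javascript
// Flag records whose retention period has expired.
// The two-year period is illustrative, not a GDPR-mandated value.
const RETENTION_DAYS = 365 * 2;

function isExpired(record, now = new Date()) {
  const ageMs = now - new Date(record.createdAt);
  return ageMs > RETENTION_DAYS * 24 * 60 * 60 * 1000;
}
```

A scheduled job would run this filter and delete (or archive, where a legal basis exists) the expired records.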

Question 4: GDPR Applicability & Data Protection Techniques

Part A: GDPR Applicability Analysis

Question: Does GDPR apply to an Asia-based university offering admissions to EU students?

Answer: YES - GDPR applies

Legal Basis:

  • GDPR Article 3(2) - Territorial Scope
  • GDPR applies to processing of personal data of individuals in the EU, regardless of the data controller's location
  • The university offers localized EU websites (e.g., uni.com/fr for France)
  • This constitutes "offering services" to individuals in the EU
  • Processing EU applicant data = mandatory GDPR compliance

Key Points:

  • Location of university (Asia) is irrelevant
  • Targeting EU data subjects triggers GDPR
  • Localized websites are evidence of targeting
  • Full compliance required including DPO appointment, PIAs, breach notifications

Part B: Anonymization vs. Pseudonymization

Question: Which technique is recommended for protecting applicant personal data?

Recommendation: Pseudonymization (preferred over Anonymization)

Rationale:

Pseudonymization (GDPR Article 4(5)):

  • ✅ Replaces identifying information with pseudonyms/tokens
  • ✅ Maintains data utility for admissions processing
  • ✅ Allows data to be re-identified with additional information (kept separately)
  • ✅ Reduces privacy risk while preserving functionality
  • ✅ Still subject to GDPR but with relaxed requirements for certain processing
  • ✅ Enables data analytics while protecting privacy

Anonymization:

  • ❌ Completely removes personally identifiable information
  • ❌ Data cannot be linked back to individuals
  • ❌ Falls outside GDPR scope (not personal data anymore)
  • ❌ Major Drawback: Loses the ability to process admissions, contact applicants, or verify identities
  • ❌ Not practical for operational admission system

Use Case Alignment: For an active admission system that needs to:

  • Contact applicants with decisions
  • Verify identities and credentials
  • Track application status
  • Process enrollments

Pseudonymization maintains the necessary functionality while enhancing privacy.


🔍 Key Concepts & Frameworks Applied

Risk Management Frameworks

  • NIST Cybersecurity Framework - Identify, Protect, Detect, Respond, Recover
  • ISO 31000 - Risk management principles and guidelines
  • Risk Register Methodology - Structured risk documentation

Privacy & Compliance

  • GDPR (General Data Protection Regulation) - EU data protection law
  • Privacy by Design - Embedding privacy into system architecture
  • Privacy Impact Assessment (PIA) - Systematic privacy risk evaluation
  • Data Protection Principles - Lawfulness, fairness, transparency, minimization

AI Ethics Principles

  • Fairness - Avoiding bias and discrimination
  • Transparency - Explainable decision-making
  • Accountability - Clear responsibility for AI decisions
  • Human Oversight - Maintaining human judgment in critical decisions

Security Controls (NIST CSF)

  • Technical Controls: Encryption, MFA, firewalls, SIEM
  • Administrative Controls: Policies, training, audits
  • Physical Controls: Access restrictions, monitoring

📚 Research & References

This analysis is supported by authoritative sources:

Cybersecurity:

  • NCSC.GOV.UK - Cyber Risk Fundamentals
  • Virtual Cyber Labs - Web Application Vulnerabilities

AI Ethics:

  • Frontiers in Psychology - AI Transparency and Accountability
  • eLearning Industry - Mitigating AI Bias Strategies
  • IEEE - Algorithmic Fairness in ML Systems

Privacy & GDPR:

  • European Union Agency for Fundamental Rights - GDPR in Practice
  • University of York - GDPR Compliant Research Guidelines
  • Official GDPR Regulation Text (Articles 3(2), 4(5))

Access Control:

  • Pathlock - Role-Based Access Control (RBAC) Guide
  • ISO 27001 - Information Security Management

Data Protection:

  • GRC World Forums - Anonymization vs. Pseudonymization
  • University of Maine System - GDPR Compliance Guide

📄 Repository Structure

csc8216-risk-trust-management/
├── README.md                                    # This file
├── Risk_and_Trust_Management_Report.pdf         # Complete analysis report
├── CSC8216_Assessment_Brief.pdf                 # Original assignment specification
├── docs/
│   ├── risk-register.md                         # Detailed risk documentation
│   ├── pia-framework.md                         # PIA methodology
│   └── references.md                            # Full bibliography
└── LICENSE

💡 Skills Demonstrated

Technical Skills

  • Cybersecurity Risk Assessment: Threat identification and evaluation
  • Privacy Impact Assessment (PIA): Structured privacy analysis
  • GDPR Compliance Analysis: Legal framework application
  • Cloud Security: AWS security best practices
  • Access Control Design: RBAC, MFA implementation

Analytical Skills

  • Stakeholder Analysis: Identifying affected parties and impacts
  • Risk Quantification: Likelihood and impact assessment
  • Mitigation Strategy Design: Practical security controls
  • Legal Analysis: GDPR article interpretation and application
  • Ethical Reasoning: AI fairness and bias analysis

Research Skills

  • Literature Review: Evidence-based analysis
  • Regulatory Research: GDPR, NIST, ISO standards
  • Case Study Analysis: Real-world AI bias examples
  • Academic Writing: Structured technical documentation


🎓 Learning Outcomes Achieved

This coursework successfully demonstrates:

  1. Risk Control & Management - Comprehensive risk register with practical controls
  2. Legal & Ethical Analysis - GDPR compliance, AI ethics, stakeholder impacts
  3. Security Assessment Methods - Applied to complex, multi-jurisdictional system
  4. Privacy Engineering - PIA methodology, pseudonymization techniques
  5. Professional Communication - Clear, structured technical writing

🔗 Real-World Applications

This analysis is directly applicable to:

Industries:

  • Higher Education: University admission systems globally
  • Healthcare: Patient data management and AI diagnosis
  • Financial Services: Customer onboarding and fraud detection
  • Government: Public service delivery and citizen data protection
  • Technology: SaaS platforms serving international users

Roles:

  • Information Security Analyst
  • Privacy Officer / Data Protection Officer (DPO)
  • Risk Management Consultant
  • AI Ethics Specialist
  • Compliance Manager (GDPR, HIPAA, SOC 2)
  • Cloud Security Architect

📊 Assessment Context

Module: CSC8216 - Risk and Trust Management
Assessment Type: Case Study Analysis
Word Limit: 1000 words (±20%)
Weighting: 100% of module grade

Marking Criteria Coverage:

  • ✅ Risk identification and register quality
  • ✅ Depth of risk analysis (ethical and technical)
  • ✅ Mitigation strategy justification
  • ✅ Privacy risk comprehensiveness
  • ✅ Stakeholder analysis clarity
  • ✅ GDPR legal analysis with article citations
  • ✅ Data protection technique evaluation

🚀 Future Extensions

Potential areas for deeper analysis:

  • Quantitative Risk Assessment: Calculating Annual Loss Expectancy (ALE)
  • Threat Modeling: STRIDE/DREAD frameworks
  • Incident Response Planning: Breach response procedures
  • Third-Party Risk: Vendor assessment (AWS, email providers)
  • Business Continuity: Disaster recovery and resilience planning
  • Advanced AI Auditing: Fairness metrics (demographic parity, equalized odds)
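The ALE extension above follows the standard quantitative formula: Single Loss Expectancy (SLE) = asset value × exposure factor, and Annual Loss Expectancy (ALE) = SLE × Annual Rate of Occurrence (ARO). A minimal sketch (the figures in the test are illustrative):

```javascript
// SLE = asset value × exposure factor (fraction of the asset lost per incident).
function singleLossExpectancy(assetValue, exposureFactor) {
  return assetValue * exposureFactor;
}

// ALE = SLE × ARO (expected incidents per year).
function annualLossExpectancy(sle, annualRateOfOccurrence) {
  return sle * annualRateOfOccurrence;
}
```

Comparing a risk's ALE against the annual cost of a control gives a simple cost-benefit test for the mitigation strategies listed earlier.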

💼 Portfolio Presentation

For Recruiters

This project showcases:

  • Analytical Thinking: Breaking down complex security and privacy challenges
  • Legal Acumen: Understanding and applying GDPR regulations
  • Ethical Awareness: Addressing AI bias and fairness concerns
  • Technical Knowledge: Cloud security, encryption, access controls
  • Communication: Translating technical concepts for stakeholders
  • Problem-Solving: Designing practical mitigation strategies

Relevant to These Roles:

  • Information Security Analyst
  • Privacy Consultant / DPO
  • Risk Management Specialist
  • Compliance Officer
  • AI Ethics Researcher
  • Cloud Security Engineer
  • GRC (Governance, Risk, Compliance) Analyst

📞 About This Work

Author: Aniket Nalawade
Student ID: 250535354
Institution: Newcastle University
Program: MSc Advanced Computer Science
Module: CSC8216 - Risk and Trust Management
Academic Year: 2025/2026


📄 Documentation

Complete Analysis Report: Risk_and_Trust_Management_Report.pdf

The report provides detailed analysis including:

  • Comprehensive risk assessment with structured risk register
  • In-depth AI ethics analysis with real-world examples
  • Full Privacy Impact Assessment (PIA) with stakeholder mapping
  • GDPR legal analysis with article citations
  • Data protection technique comparison
  • Evidence-based recommendations with academic references

🔖 Tags & Keywords

cybersecurity risk-management GDPR privacy AI-ethics data-protection compliance PIA cloud-security RBAC encryption algorithmic-bias pseudonymization stakeholder-analysis newcastle-university msc-computer-science


Built with 🔒 Security and 🛡️ Privacy in Mind
Newcastle University | School of Computing
