Newcastle University | MSc Advanced Computer Science
Module: Risk and Trust Management (CSC8216)
Academic Year: 2025/2026
This coursework presents a comprehensive cybersecurity risk assessment and privacy impact analysis for a university's online admission application system. The analysis covers risk identification, AI ethics, GDPR compliance, and data protection strategies for a complex multi-jurisdictional system processing sensitive applicant data.
Scenario: A university-wide online admission platform serving both local and international (EU) applicants with the following architecture:
- Frontend: React-based web interface
- Backend: Node.js with Express
- Database: PostgreSQL for applicant data
- Cloud Storage: AWS S3 for document management
- Email Service: AWS SES for communications
- Security: OAuth 2.0, MFA, WAF, SIEM monitoring
This assessment demonstrates expertise in:
✅ Risk Assessment: Identifying and evaluating cybersecurity threats
✅ Risk Management: Creating structured risk registers with mitigation controls
✅ AI Ethics: Analyzing algorithmic bias and transparency issues
✅ Privacy Impact Assessment (PIA): GDPR-compliant privacy analysis
✅ Legal Compliance: Understanding GDPR applicability and requirements
✅ Data Protection: Comparing anonymization vs. pseudonymization techniques
✅ Stakeholder Analysis: Identifying affected parties and impacts
Objective: Identify potential cybersecurity risks and create a formal risk register
Key Risks Analyzed:
- Unauthorized Access
- Threat: Hackers exploiting vulnerabilities to access sensitive data
- Likelihood: Medium
- Impact: High
- Controls: OAuth 2.0 with MFA, Web Application Firewall (WAF)
- Data Leakage
- Threat: Confidential applicant documents accessed or leaked by unauthorized parties
- Likelihood: Medium
- Impact: High
- Controls: Secure AWS bucket policies, encryption, strict access control
Risk Categories Considered:
- Hacking attempts and external threats
- System vulnerabilities (web applications, cloud infrastructure)
- User negligence (weak passwords, information sharing)
Mitigation Strategies:
- Multi-factor authentication enforcement
- End-to-end encryption
- Web Application Firewall (WAF)
- Regular security audits
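The risks and mitigations above feed into a formal risk register. As an illustration, a minimal machine-readable sketch in Python; the risk names, ratings, and controls come from this assessment, while the ordinal 1-3 scoring scale is an assumption for the sketch, not part of the assessment brief:

```python
from dataclasses import dataclass

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

@dataclass
class Risk:
    name: str
    threat: str
    likelihood: str   # Low / Medium / High
    impact: str       # Low / Medium / High
    controls: list

    @property
    def score(self) -> int:
        # Simple ordinal likelihood x impact score (1-9)
        return LEVELS[self.likelihood] * LEVELS[self.impact]

register = [
    Risk("Unauthorized Access",
         "Hackers exploiting vulnerabilities to access sensitive data",
         "Medium", "High",
         ["OAuth 2.0 with MFA", "Web Application Firewall"]),
    Risk("Data Leakage",
         "Confidential documents accessed or leaked",
         "Medium", "High",
         ["Secure S3 bucket policies", "Encryption", "Strict access control"]),
]

# Rank risks for treatment, highest score first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score} ({risk.likelihood} x {risk.impact})")
```

A register kept in this form can be diffed, audited, and re-scored as controls change, which suits the "regular security audits" control above.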
Objective: Assess ethical and cybersecurity risks of AI-based applicant screening
Context: The university is considering implementing an AI/ML system to automatically evaluate applicant profiles based on transcripts, test scores, and personal statements.
Analysis:
- AI systems trained on historical admission data may perpetuate existing biases
- Risks disadvantaging applicants from diverse socio-economic or cultural backgrounds
- Can result in systemic discrimination against minorities or underrepresented groups
- Research has documented bias in AI hiring and admissions tools (e.g., Amazon's discontinued CV-screening tool, which systematically penalized women's résumés)
Real-World Impact:
- Reduced opportunities for diverse candidates
- Legal and reputational risks for the institution
- Ethical concerns about fairness and equity
Mitigation Strategies:
- Diverse Training Data: Ensure datasets represent all demographic groups
- Inclusive Design: Involve ethicists and community representatives in AI development
- Regular Audits: Continuous monitoring for discriminatory patterns
- Hybrid Decision-Making: Human review of all AI recommendations
- Bias Testing: Regular algorithmic fairness assessments
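One concrete form the bias-testing strategy above can take is a demographic parity check over admission decisions. The sketch below uses the US EEOC "four-fifths" heuristic as the fairness threshold; the decision data is invented for illustration:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, admitted_bool) pairs -> per-group admit rate."""
    totals, admitted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        admitted[group] += int(ok)
    return {g: admitted[g] / totals[g] for g in totals}

def passes_four_fifths_rule(rates):
    # EEOC "four-fifths" heuristic: lowest rate / highest rate >= 0.8
    return min(rates.values()) / max(rates.values()) >= 0.8

# Hypothetical decisions for two demographic groups A and B
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates, passes_four_fifths_rule(rates))
```

Here group B's admit rate is half of group A's, so the check fails and the batch would be flagged for human review under the hybrid decision-making strategy.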
Analysis:
- AI systems operate as "black boxes" with opaque decision-making processes
- Applicants and staff unable to understand how scores are generated
- Raises accountability and trust issues
- Can conflict with principles of fairness and due process
Mitigation Strategies:
- Explainable AI (XAI): Implement interpretable models with clear feature importance
- Documentation: Clear explanation of how AI components affect scoring
- Human Oversight: Final decisions made by admission committee
- Transparency Policy: Public disclosure of AI usage in admissions
- Appeal Process: Mechanism for applicants to contest AI-influenced decisions
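To illustrate the XAI strategy above: with an interpretable model such as a linear score, every feature's contribution to a decision can be surfaced directly, which supports both the documentation and the appeal process. The weights and features below are invented for illustration only:

```python
# Transparent linear scoring: each feature's contribution to the final
# score is directly inspectable, unlike a black-box model.
# Weights and features are illustrative, not a real admissions model.
WEIGHTS = {"gpa": 10.0, "test_score": 0.05, "statement_rating": 5.0}

def score_with_explanation(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"gpa": 3.6, "test_score": 1400, "statement_rating": 4})
print(total)  # 126.0
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.1f}")
```

An applicant contesting a decision can be shown exactly which features drove the score, which is the accountability property the mitigation list asks for.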
Objective: Conduct a structured PIA for the admission system processing sensitive data across multiple jurisdictions
Risk Description:
- External hackers or malicious insiders accessing personal identifiers, academic records, and recommendation letters
- Data stored in cloud infrastructure (AWS S3) presents additional attack surface
- Potential for data misuse, identity theft, or service disruption
Affected Stakeholders:
- Applicants: Personal data compromised
- University Staff: Reputational damage, legal liability
- Faculty Members: Trust erosion
- IT Administrators: Incident response burden
Mitigation Strategies:
- Role-Based Access Control (RBAC): Limit data access to authorized personnel only
- Multi-Factor Authentication (MFA): Already implemented, enforce rigorously
- SIEM Monitoring: Real-time detection and response to anomalies
- Regular Access Audits: Periodic review of who has access to what data
- Encryption: Data encrypted at rest (database, cloud storage) and in transit (TLS/SSL)
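As a sketch of the RBAC control above (the roles and permissions are illustrative, not the system's actual policy):

```python
# Minimal role-based access control check. Roles and permissions are
# invented for illustration; a real deployment would load these from
# a centrally managed policy store.
ROLE_PERMISSIONS = {
    "admissions_officer": {"read_application", "update_status"},
    "it_admin": {"read_audit_log"},
    "applicant": {"read_own_application"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Unknown roles get an empty permission set (deny by default)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("admissions_officer", "read_application"))  # True
print(is_allowed("it_admin", "update_status"))               # False
```

The deny-by-default lookup is the point of the sketch: access must be granted explicitly per role, which is what makes the periodic access audits above tractable.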
Risk Description:
- System processes EU applicant data, requiring GDPR compliance
- Non-compliance can lead to significant fines (up to €20M or 4% of global annual turnover, whichever is higher)
- Legal penalties, reputational damage, and loss of trust
Affected Stakeholders:
- Applicants: Rights violations (right to erasure, access, rectification)
- University: Legal and financial penalties
- IT Department: Compliance implementation responsibility
Mitigation Strategies:
- Regular PIAs: Ongoing privacy impact assessments
- Data Protection Audits: Ensure continuous GDPR compliance
- Data Minimization: Collect only necessary information
- Retention Policies: Delete data when no longer needed
- Staff Training: Educate team on GDPR requirements and data protection principles
- Data Protection Officer (DPO): Appoint dedicated compliance officer
Question: Does GDPR apply to an Asia-based university offering admissions to EU students?
Answer: YES - GDPR applies
Legal Basis:
- GDPR Article 3(2) - Territorial Scope
- GDPR applies to processing of personal data of individuals in the EU, regardless of the data controller's location
- The university offers localized EU websites (e.g., uni.com/fr for France)
- This constitutes "offering services" to individuals in the EU
- Processing EU applicant data = mandatory GDPR compliance
Key Points:
- Location of university (Asia) is irrelevant
- Targeting EU data subjects triggers GDPR
- Localized websites are evidence of targeting
- Full compliance required including DPO appointment, PIAs, breach notifications
Question: Which technique is recommended for protecting applicant personal data?
Recommendation: Pseudonymization (preferred over Anonymization)
Rationale:
Pseudonymization (GDPR Article 4(5)):
- ✅ Replaces identifying information with pseudonyms/tokens
- ✅ Maintains data utility for admissions processing
- ✅ Allows data to be re-identified with additional information (kept separately)
- ✅ Reduces privacy risk while preserving functionality
- ✅ Still subject to GDPR but with relaxed requirements for certain processing
- ✅ Enables data analytics while protecting privacy
Anonymization:
- Completely and irreversibly removes personally identifiable information; data cannot be linked back to individuals
- Falls outside GDPR scope (no longer personal data)
- ❌ Major Drawback: loses the ability to process admissions, contact applicants, or verify identities
- ❌ Not practical for an operational admission system
Use Case Alignment: For an active admission system that needs to:
- Contact applicants with decisions
- Verify identities and credentials
- Track application status
- Process enrollments
→ Pseudonymization maintains necessary functionality while enhancing privacy
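A minimal sketch of how pseudonymization could work in practice, using a keyed hash so that re-identification requires "additional information" (the key) kept separately, in line with GDPR Article 4(5). The identifiers and key handling here are illustrative, not a production design:

```python
import hashlib
import hmac
import secrets

# The key is the GDPR Art. 4(5) "additional information": it must be
# stored separately from the pseudonymized records, with its own
# access controls. Generated here for illustration only.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(applicant_id: str) -> str:
    # Keyed hash (HMAC-SHA256): deterministic, so the same applicant
    # always maps to the same token, but unlinkable without the key.
    return hmac.new(SECRET_KEY, applicant_id.encode(), hashlib.sha256).hexdigest()

record = {"applicant": pseudonymize("alice@example.com"), "score": 87}
# The record still supports analytics and status tracking; contacting
# the applicant requires the separately held key and identifier list.
print(record["applicant"][:16], record["score"])
```

Because the mapping is deterministic, application records, documents, and status updates for the same applicant stay linked, which is exactly the functionality anonymization would destroy.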
Frameworks & Methodologies Applied:
- NIST Cybersecurity Framework - Identify, Protect, Detect, Respond, Recover
- ISO 31000 - Risk management principles and guidelines
- Risk Register Methodology - Structured risk documentation
- GDPR (General Data Protection Regulation) - EU data protection law
- Privacy by Design - Embedding privacy into system architecture
- Privacy Impact Assessment (PIA) - Systematic privacy risk evaluation
- Data Protection Principles - Lawfulness, fairness, transparency, minimization
AI Ethics Principles:
- Fairness - Avoiding bias and discrimination
- Transparency - Explainable decision-making
- Accountability - Clear responsibility for AI decisions
- Human Oversight - Maintaining human judgment in critical decisions
Control Categories:
- Technical Controls: Encryption, MFA, firewalls, SIEM
- Administrative Controls: Policies, training, audits
- Physical Controls: Access restrictions, monitoring
This analysis is supported by authoritative sources:
Cybersecurity:
- NCSC.GOV.UK - Cyber Risk Fundamentals
- Virtual Cyber Labs - Web Application Vulnerabilities
AI Ethics:
- Frontiers in Psychology - AI Transparency and Accountability
- eLearning Industry - Mitigating AI Bias Strategies
- IEEE - Algorithmic Fairness in ML Systems
Privacy & GDPR:
- European Union Agency for Fundamental Rights - GDPR in Practice
- University of York - GDPR Compliant Research Guidelines
- Official GDPR Regulation Text (Articles 3(2), 4(5))
Access Control:
- Pathlock - Role-Based Access Control (RBAC) Guide
- ISO 27001 - Information Security Management
Data Protection:
- GRC World Forums - Anonymization vs. Pseudonymization
- University of Maine System - GDPR Compliance Guide
csc8216-risk-trust-management/
├── README.md # This file
├── Risk_and_Trust_Management_Report.pdf # Complete analysis report
├── CSC8216_Assessment_Brief.pdf # Original assignment specification
├── docs/
│ ├── risk-register.md # Detailed risk documentation
│ ├── pia-framework.md # PIA methodology
│ └── references.md # Full bibliography
└── LICENSE
✅ Cybersecurity Risk Assessment - Threat identification and evaluation
✅ Privacy Impact Assessment (PIA) - Structured privacy analysis
✅ GDPR Compliance Analysis - Legal framework application
✅ Cloud Security - AWS security best practices
✅ Access Control Design - RBAC, MFA implementation
✅ Stakeholder Analysis - Identifying affected parties and impacts
✅ Risk Quantification - Likelihood and impact assessment
✅ Mitigation Strategy Design - Practical security controls
✅ Legal Analysis - GDPR article interpretation and application
✅ Ethical Reasoning - AI fairness and bias analysis
✅ Literature Review - Evidence-based analysis
✅ Regulatory Research - GDPR, NIST, ISO standards
✅ Case Study Analysis - Real-world AI bias examples
✅ Academic Writing - Structured technical documentation
This coursework successfully demonstrates:
- ✅ Risk Control & Management - Comprehensive risk register with practical controls
- ✅ Legal & Ethical Analysis - GDPR compliance, AI ethics, stakeholder impacts
- ✅ Security Assessment Methods - Applied to complex, multi-jurisdictional system
- ✅ Privacy Engineering - PIA methodology, pseudonymization techniques
- ✅ Professional Communication - Clear, structured technical writing
This analysis is directly applicable to:
Industries:
- Higher Education: University admission systems globally
- Healthcare: Patient data management and AI diagnosis
- Financial Services: Customer onboarding and fraud detection
- Government: Public service delivery and citizen data protection
- Technology: SaaS platforms serving international users
Roles:
- Information Security Analyst
- Privacy Officer / Data Protection Officer (DPO)
- Risk Management Consultant
- AI Ethics Specialist
- Compliance Manager (GDPR, HIPAA, SOC 2)
- Cloud Security Architect
Module: CSC8216 - Risk and Trust Management
Assessment Type: Case Study Analysis
Word Limit: 1000 words (±20%)
Weighting: 100% of module grade
Marking Criteria Coverage:
- ✅ Risk identification and register quality
- ✅ Depth of risk analysis (ethical and technical)
- ✅ Mitigation strategy justification
- ✅ Privacy risk comprehensiveness
- ✅ Stakeholder analysis clarity
- ✅ GDPR legal analysis with article citations
- ✅ Data protection technique evaluation
Potential areas for deeper analysis:
- Quantitative Risk Assessment: Calculating Annual Loss Expectancy (ALE)
- Threat Modeling: STRIDE/DREAD frameworks
- Incident Response Planning: Breach response procedures
- Third-Party Risk: Vendor assessment (AWS, email providers)
- Business Continuity: Disaster recovery and resilience planning
- Advanced AI Auditing: Fairness metrics (demographic parity, equalized odds)
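Of these extensions, the quantitative ALE calculation is straightforward to sketch: ALE = SLE x ARO, where the single loss expectancy (SLE) is asset value x exposure factor. The figures below are invented for illustration:

```python
def annual_loss_expectancy(asset_value: float, exposure_factor: float,
                           annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO, where SLE = asset value x exposure factor."""
    sle = asset_value * exposure_factor
    return sle * annual_rate_of_occurrence

# Illustrative numbers only: a breach exposing 30% of a 500k GBP data
# asset, expected once every 4 years (ARO = 0.25).
print(annual_loss_expectancy(500_000, 0.30, 0.25))  # 37500.0
```

The resulting figure gives a ceiling on what it is rational to spend annually on controls for that risk, which is how a quantitative register would extend the qualitative Likelihood/Impact ratings used here.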
This project showcases:
- Analytical Thinking: Breaking down complex security and privacy challenges
- Legal Acumen: Understanding and applying GDPR regulations
- Ethical Awareness: Addressing AI bias and fairness concerns
- Technical Knowledge: Cloud security, encryption, access controls
- Communication: Translating technical concepts for stakeholders
- Problem-Solving: Designing practical mitigation strategies
- Information Security Analyst
- Privacy Consultant / DPO
- Risk Management Specialist
- Compliance Officer
- AI Ethics Researcher
- Cloud Security Engineer
- GRC (Governance, Risk, Compliance) Analyst
Author: Aniket Nalawade
Student ID: 250535354
Institution: Newcastle University
Program: MSc Advanced Computer Science
Module: CSC8216 - Risk and Trust Management
Academic Year: 2025/2026
Complete Analysis Report: Risk_and_Trust_Management_Report.pdf
The report provides detailed analysis including:
- Comprehensive risk assessment with structured risk register
- In-depth AI ethics analysis with real-world examples
- Full Privacy Impact Assessment (PIA) with stakeholder mapping
- GDPR legal analysis with article citations
- Data protection technique comparison
- Evidence-based recommendations with academic references
cybersecurity risk-management GDPR privacy AI-ethics data-protection compliance PIA cloud-security RBAC encryption algorithmic-bias pseudonymization stakeholder-analysis newcastle-university msc-computer-science
Built with 🔒 Security and 🛡️ Privacy in Mind
Newcastle University | School of Computing