VED-956: Audit table update #1026

Merged

dlzhry2nhs merged 29 commits into master from VED-956-Audit-Table-Update
Dec 8, 2025

Conversation

@JamesW1-NHS
Contributor

Summary

  • Routine Change
  • ❗ Breaking Change
  • 🤖 Operational or Infrastructure Change
  • ✨ New Feature
  • ⚠️ Potential issues that might be caused by this change

Reviews Required

  • Dev
  • Test
  • Tech Author
  • Product Owner

Review Checklist

ℹ️ This section is to be filled in by the reviewer.

  • I have reviewed the changes in this PR and they fulfil all of the acceptance criteria of the ticket.
  • If there were infrastructure, operational, or build changes, I have made sure there is sufficient evidence that the changes will work.
  • If there were changes that are outside of the regular release processes, e.g. account infrastructure to set up, manual setup for external API integrations, or secrets to set, then I have checked that the developer has flagged this to the Tech Lead as release steps.
  • I have checked that no Personally Identifiable Data (PID) is logged as part of the changes.

@github-actions
Contributor

github-actions bot commented Dec 3, 2025

This branch is working on a ticket in the NHS England VED JIRA Project. Here's a handy link to the ticket:

VED-956

Contributor

@dlzhry2nhs dlzhry2nhs left a comment

Looks good - just a handful of comments. Only really key thing in there is to avoid putting functionality in the log decorators.

I tested manually and all looked okay, albeit with a very small file. One thing to consider, as I am sure this will be tested with larger files, is the eventually consistent nature of DynamoDB. Hopefully it will be fast enough, but there is a risk that we may calculate the success count while DDB is still working through our increment requests for failures.

One potential mitigation would be that, had we followed the architecture diagram, we could have incremented failures slightly earlier in the process, in the recordforwarder. I am okay with the proposed approach in this PR.

Another potential option could be to use a ConsistentRead, though I don't know enough to say for sure whether that will work. I will leave it up to you whether you want to make any adjustments, or keep this up your sleeve in case there are any test findings.
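For reference, a strongly consistent read is a one-line change on the boto3 `get_item` call. This is only a sketch of the option being discussed: the function, table name, and `record_count` attribute are illustrative assumptions, not taken from this PR, and the client is injected so the sketch can be exercised without AWS credentials.

```python
def get_record_count_consistent(client, table_name: str, message_id: str):
    """Read the audit item with a strongly consistent read, so the count
    reflects every increment DynamoDB has already acknowledged.

    `client` would normally be boto3.client("dynamodb"); it is passed in
    here so the function can be tested with a stub.
    """
    response = client.get_item(
        TableName=table_name,
        Key={"message_id": {"S": message_id}},
        ConsistentRead=True,  # get_item defaults to eventually consistent reads
    )
    count = response.get("Item", {}).get("record_count", {}).get("N")
    return int(count) if count is not None else None
```

One trade-off worth noting: strongly consistent reads consume twice the read capacity of eventually consistent reads, so this is cheap for a once-per-file completion check but worth watching if it ended up on a hot path.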

self.mock_dynamodb_client.get_item.return_value = {"Item": {"message_id": {"S": test_message_id}}}

self.assertIsNone(audit_table.get_record_count_by_message_id(test_message_id))

Contributor

I am fine with these tests as-is. We are using moto in the relevant integration tests i.e. test_ack_processor.

I think mocking is fine for this unit test.
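As a sketch of the mocking pattern being discussed (the function body and the `record_count` attribute name are stand-ins, since the real `audit_table` module is not shown in full here): a `MagicMock` client keeps the unit test fast and dependency-free, while the moto-backed integration tests such as test_ack_processor cover real DynamoDB semantics.

```python
from unittest.mock import MagicMock

def get_record_count_by_message_id(client, table_name, message_id):
    """Stand-in for the function under test: returns the stored count,
    or None when the audit item has no count attribute yet."""
    item = client.get_item(
        TableName=table_name,
        Key={"message_id": {"S": message_id}},
    ).get("Item", {})
    count = item.get("record_count", {}).get("N")
    return int(count) if count is not None else None

mock_client = MagicMock()
# Item exists but carries no count attribute, so the function returns None
mock_client.get_item.return_value = {"Item": {"message_id": {"S": "m1"}}}
assert get_record_count_by_message_id(mock_client, "audit-table", "m1") is None
```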

message_id = base_log_data.get("message_id")
set_audit_table_ingestion_complete(file_key, message_id, complete_time)

generate_and_send_logs(STREAM_NAME, complete_time, base_log_data, additional_log_data)
Contributor

https://screening-discovery.slack.com/archives/C08AVB8MEBG/p1764772944077509 Sorry, I should have pinged you about this. Please see the fourth-to-last message. We should include the full payload from complete_batch_file_process in the additional logs, and also add the success and failure counts into this payload. It already records the record count.

I can handle updating ITOC with the changes they will need to make to their queries.
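A minimal sketch of the shape being requested, assuming the payload is a plain dict; the helper name and the `success_count`/`failure_count` field names are hypothetical, not taken from the PR:

```python
def build_additional_log_data(batch_payload: dict,
                              success_count: int,
                              failure_count: int) -> dict:
    """Merge the full complete_batch_file_process payload into the
    additional log data, adding success and failure counts alongside
    the record count the payload already carries."""
    log_data = dict(batch_payload)  # copy so the original payload is untouched
    log_data["success_count"] = success_count
    log_data["failure_count"] = failure_count
    return log_data

# Hypothetical payload shape, for illustration only
payload = {"file_key": "batch.csv", "record_count": 100}
extra = build_additional_log_data(payload, success_count=97, failure_count=3)
```

Keeping the counts inside the same payload dict means the downstream ITOC queries only need to pick up the two new fields rather than a new log structure.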

@JamesW1-NHS
Contributor Author

> Looks good - just a handful of comments. Only really key thing in there is to avoid putting functionality in the log decorators.
>
> I tested manually and all looked okay, albeit with a very small file. One thing to consider, as I am sure this will be tested with larger files, is the eventually consistent nature of DynamoDB. Hopefully it will be fast enough, but there is a risk that we may calculate the success count while DDB is still working through our increment requests for failures.
>
> One potential mitigation would be that, had we followed the architecture diagram, we could have incremented failures slightly earlier in the process, in the recordforwarder. I am okay with the proposed approach in this PR.
>
> Another potential option could be to use a ConsistentRead, though I don't know enough to say for sure whether that will work. I will leave it up to you whether you want to make any adjustments, or keep this up your sleeve in case there are any test findings.

Noted. If it proves too slow then we ought to make a new ticket to refactor the failures into the recordforwarder.

@sonarqubecloud

sonarqubecloud bot commented Dec 8, 2025
