Commit 3e4fa67

Merge pull request #2355 from hpe-dev-incubator/cms/event/learn-how-ai-hackers-detect-fragility-and-how-to-thwart-them-with-ai-model-resilience
Create Events “learn-how-ai-hackers-detect-fragility-and-how-to-thwart-them-with-ai-model-resilience”
2 parents bef9232 + f38bec2

1 file changed: 16 additions & 0 deletions
@@ -0,0 +1,16 @@
+---
+title: Learn how AI hackers detect fragility and how to thwart them with AI
+  model resilience
+dateStart: 2024-03-19T23:01:44.318Z
+dateEnd: 2024-03-20T22:59:44.363Z
+category: Virtual Event
+image: /img/event-munch-and-learn-newlogo400x400.png
+link: https://hpe.zoom.us/webinar/register/1017085065469/WN_qg26mm3ZQn6Wq3DzwuSapw
+width: large
+---
+## Learn how AI hackers detect fragility and how to thwart them with AI model resilience
+
+March 20, 2024
+
+Learn how AI hackers detect fragility and how to thwart them with AI model resilience
+On every lab test, your AI was superhuman. But how will it fare in the real world of smog, smears, and nation state hackers? In this session, we'll explore how AI hackers can measure the fragility of today's AI models, covering the model's vulnerability under real-world conditions across applications of varying data dimensions, from signals to images to videos. We’ll then show how to engineer robustness into models and sketch out tomorrow's AI supply chain where confidence is measurable and the model's perception can be inspected.
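
To make the idea of "measuring fragility" concrete, the sketch below probes a toy classifier with increasing input noise and records how its accuracy degrades. The model, data, and noise levels are hypothetical stand-ins chosen for illustration, not material from the session or from the committed file.

```python
import numpy as np

# Hypothetical "model": a fixed linear classifier on 2-D points.
rng = np.random.default_rng(0)
weights = np.array([1.5, -2.0])
bias = 0.3

def predict(x):
    """Classify points as 0/1 with a simple linear decision boundary."""
    return (x @ weights + bias > 0).astype(int)

# Synthetic, clean evaluation set; labels come from the clean decision boundary,
# so the model is perfect on unperturbed data by construction.
x_clean = rng.normal(size=(1000, 2))
y_true = predict(x_clean)

# Fragility probe: accuracy as inputs are perturbed with growing Gaussian noise,
# a rough stand-in for "real-world conditions" such as smog or smears.
for sigma in (0.0, 0.1, 0.5, 1.0, 2.0):
    x_noisy = x_clean + rng.normal(scale=sigma, size=x_clean.shape)
    accuracy = (predict(x_noisy) == y_true).mean()
    print(f"noise sigma={sigma:.1f}  accuracy={accuracy:.3f}")
```

The output traces a simple fragility curve: how quickly accuracy falls as the perturbation grows, which is one rough proxy for a model's robustness under real-world conditions.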
