1 change: 1 addition & 0 deletions README.md
@@ -1,3 +1,4 @@
[![Review Assignment Due Date](https://classroom.github.com/assets/deadline-readme-button-22041afd0340ce965d47ae6ef1cefeee28c7c493a6346c4f15d667ab976d596c.svg)](https://classroom.github.com/a/tLTGCA4G)
---
title: "Activity 1 - Hello, Azure AI"
type: lab
10 changes: 10 additions & 0 deletions REFLECTION.md
@@ -6,10 +6,20 @@ Answer these questions after completing the activity (2-3 sentences each). Conne

Which of the three Azure AI services (OpenAI, Content Safety, Language) surprised you the most? Connect this to something specific you observed during your experiments -- a response you didn't expect, a behavior that seemed too easy or too hard, or a result that made you rethink how the service works.

> The Content Safety service surprised me the most. It's also the most difficult to do well: it has to take into account the nuances of human speech and text patterns and attempt to classify them correctly against far more rigid rules.

## 2. Lazy Initialization

How would you explain the lazy initialization pattern to a colleague? Why is it used instead of creating clients at the top of the file?

> Lazy initialization delays creating a client until the first time it is actually needed, then caches it for every later call.
> It's used instead of top-of-file creation so that importing the module stays fast and side-effect free, and so a run never pays the setup cost for a service it doesn't touch.

## 3. Content Safety in the Real World

A resident files this complaint: *"A man was assaulted at this intersection because the street light has been out for months."* This text describes real violence but is a legitimate safety concern. Should the system block it, flag it for human review, or pass it through? What factors would you weigh in making that decision?

> Ideally, the system should be robust enough to distinguish actual violence from a report that merely describes violence as part of a legitimate safety concern.
> In this case, the system should flag it for human review, so a person can make the final determination.
> If the system blocks it, a legitimate safety concern never reaches the department that can help.
> If the system passes everything through, then the safety check is pointless.
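That block / flag / pass decision can be sketched as a severity-threshold router. The thresholds below are illustrative assumptions for this answer, not Azure defaults:

```python
def route_complaint(categories: dict[str, int]) -> str:
    """Map Content Safety severity scores to a routing decision.

    Assumed policy: severity >= 4 blocks outright, 2-3 is flagged for a
    human, and anything lower passes straight through to the department.
    """
    worst = max(categories.values(), default=0)
    if worst >= 4:
        return "block"
    if worst >= 2:
        return "human_review"
    return "pass"
```

Under this policy, a complaint like the dark-intersection assault report, which would likely score low-to-mid on Violence, lands in `human_review` rather than being silently dropped.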
93 changes: 53 additions & 40 deletions app/main.py
@@ -44,14 +44,12 @@ def _get_openai_client():
"""Lazily initialize the Azure OpenAI client."""
global _openai_client
if _openai_client is None:
# TODO: Uncomment and configure
# from openai import AzureOpenAI
# _openai_client = AzureOpenAI(
# azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
# api_key=os.environ["AZURE_OPENAI_API_KEY"],
# api_version="2024-10-21",
# )
raise NotImplementedError("Configure the Azure OpenAI client")
from openai import AzureOpenAI
_openai_client = AzureOpenAI(
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
api_key=os.environ["AZURE_OPENAI_API_KEY"],
api_version="2024-10-21",
)
return _openai_client


@@ -61,14 +59,12 @@ def _get_content_safety_client():
if _content_safety_client is None:
# NOTE: The Content Safety SDK handles API versioning internally --
# no api_version parameter is needed (unlike the OpenAI SDK).
# TODO: Uncomment and configure
# from azure.ai.contentsafety import ContentSafetyClient
# from azure.core.credentials import AzureKeyCredential
# _content_safety_client = ContentSafetyClient(
# endpoint=os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"],
# credential=AzureKeyCredential(os.environ["AZURE_CONTENT_SAFETY_KEY"]),
# )
raise NotImplementedError("Configure the Content Safety client")
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
_content_safety_client = ContentSafetyClient(
endpoint=os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"],
credential=AzureKeyCredential(os.environ["AZURE_CONTENT_SAFETY_KEY"]),
)
return _content_safety_client


@@ -78,14 +74,12 @@ def _get_language_client():
if _language_client is None:
# NOTE: The Language SDK handles API versioning internally --
# no api_version parameter is needed (unlike the OpenAI SDK).
# TODO: Uncomment and configure
# from azure.ai.textanalytics import TextAnalyticsClient
# from azure.core.credentials import AzureKeyCredential
# _language_client = TextAnalyticsClient(
# endpoint=os.environ["AZURE_AI_LANGUAGE_ENDPOINT"],
# credential=AzureKeyCredential(os.environ["AZURE_AI_LANGUAGE_KEY"]),
# )
raise NotImplementedError("Configure the AI Language client")
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential
_language_client = TextAnalyticsClient(
endpoint=os.environ["AZURE_AI_LANGUAGE_ENDPOINT"],
credential=AzureKeyCredential(os.environ["AZURE_AI_LANGUAGE_KEY"]),
)
return _language_client


@@ -101,14 +95,27 @@ def classify_311_request(request_text: str) -> dict:
Returns:
dict with keys: category, confidence, reasoning
"""
# TODO: Step 1.1 - Get the OpenAI client
# TODO: Step 1.2 - Call client.chat.completions.create() with:
# model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o")
# A system message that classifies into: Pothole, Noise Complaint,
# Trash/Litter, Street Light, Water/Sewer, Other
# response_format={"type": "json_object"}, temperature=0
# TODO: Step 1.3 - Parse the JSON response with json.loads()
raise NotImplementedError("Implement classify_311_request in Step 1")
client = _get_openai_client()
response = client.chat.completions.create(
model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o"),
messages=[
{
"role": "system",
"content": (
"You are a helpful assistant that classifies Memphis 311 service "
"requests into one of the following categories: Pothole, Noise Complaint, "
"Trash/Litter, Street Light, Water/Sewer, Other. Respond with a JSON object "
"with keys: category (one of the categories), confidence (0-1), and reasoning (brief explanation)."
),
},
{"role": "user", "content": request_text},
],
response_format={"type": "json_object"},
temperature=0,
)

result = json.loads(response.choices[0].message.content)
return result
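Because the model's reply is free-form text that merely promises JSON, a defensive parse step is worth sketching. `validate_classification` and its fallback values are hypothetical, not part of the assignment:

```python
import json

VALID_CATEGORIES = {"Pothole", "Noise Complaint", "Trash/Litter",
                    "Street Light", "Water/Sewer", "Other"}


def validate_classification(raw: str) -> dict:
    """Parse the model's JSON reply, falling back to 'Other' on bad output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"category": "Other", "confidence": 0.0,
                "reasoning": "model reply was not valid JSON"}
    if data.get("category") not in VALID_CATEGORIES:
        data["category"] = "Other"
    return data
```

With `response_format={"type": "json_object"}` and `temperature=0` the happy path dominates, but the guard keeps one malformed reply from crashing the pipeline.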


# ---------------------------------------------------------------------------
@@ -123,10 +130,14 @@ def check_content_safety(text: str) -> dict:
Returns:
dict with keys: safe (bool), categories (dict of category: severity)
"""
# TODO: Step 2.1 - Get the Content Safety client
# TODO: Step 2.2 - Call client.analyze_text() with AnalyzeTextOptions
# TODO: Step 2.3 - Return safety results
raise NotImplementedError("Implement check_content_safety in Step 2")
client = _get_content_safety_client()

from azure.ai.contentsafety.models import AnalyzeTextOptions
result = client.analyze_text(AnalyzeTextOptions(text=text))

categories = {category.category: category.severity for category in result.categories_analysis}
safe = all(severity == 0 for severity in categories.values())
return {"safe": safe, "categories": categories}


# ---------------------------------------------------------------------------
@@ -141,10 +152,12 @@ def extract_key_phrases(text: str) -> list[str]:
Returns:
List of key phrase strings.
"""
# TODO: Step 3.1 - Get the Language client
# TODO: Step 3.2 - Call client.extract_key_phrases([text])
# TODO: Step 3.3 - Return the list of key phrases
raise NotImplementedError("Implement extract_key_phrases in Step 3")
client = _get_language_client()

response = client.extract_key_phrases([text])
key_phrases = response[0].key_phrases if not response[0].is_error else []

return key_phrases
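Key-phrase results can contain whitespace and case variants of the same phrase across requests; a small hypothetical post-processing helper (not required by the activity) could normalize them:

```python
def normalize_phrases(phrases: list[str]) -> list[str]:
    """Strip whitespace and drop case-insensitive duplicates, keeping order."""
    seen = set()
    out = []
    for phrase in phrases:
        key = phrase.strip().lower()
        if key and key not in seen:
            seen.add(key)
            out.append(phrase.strip())
    return out
```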


def main():