1 change: 1 addition & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
[![Review Assignment Due Date](https://classroom.github.com/assets/deadline-readme-button-22041afd0340ce965d47ae6ef1cefeee28c7c493a6346c4f15d667ab976d596c.svg)](https://classroom.github.com/a/tLTGCA4G)
---
title: "Activity 1 - Hello, Azure AI"
type: lab
8 changes: 8 additions & 0 deletions REFLECTION.md
@@ -2,14 +2,22 @@

Answer these questions after completing the activity (2-3 sentences each). Connect your answers to specific things you observed while coding and experimenting.

*Note: I did not get the code to work.*

## 1. Service Surprises

Which of the three Azure AI services (OpenAI, Content Safety, Language) surprised you the most? Connect this to something specific you observed during your experiments -- a response you didn't expect, a behavior that seemed too easy or too hard, or a result that made you rethink how the service works.

Changing the temperature of the OpenAI client surprised me the most: as I raised it, the results went from always identical, to slightly different, to very different between runs.
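
The effect is easy to see in miniature without calling Azure at all. The toy sampler below is my own illustration (not the OpenAI SDK): it applies temperature scaling to a few made-up token scores. At temperature 0 it always picks the top-scoring token; higher temperatures flatten the softmax distribution so other tokens get sampled too.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
rng = random.Random(0)

greedy = {sample_with_temperature(logits, 0, rng) for _ in range(5)}
hot = {sample_with_temperature(logits, 2.0, rng) for _ in range(50)}
print(greedy)  # always {0}: temperature 0 is deterministic
print(hot)     # several indices: high temperature adds variety
```

Dividing the scores by the temperature before the softmax is why this works: small temperatures exaggerate the gap between the best token and the rest, while large temperatures shrink it toward a uniform pick.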

## 2. Lazy Initialization

How would you explain the lazy initialization pattern to a colleague? Why is it used instead of creating clients at the top of the file?

Lazy initialization defers creating a client until the first time it is actually needed, instead of at the top of the file. If we created the clients at import time and our credentials were not yet loaded, simply importing the module would raise an error; with lazy initialization, a configuration problem only surfaces when (and if) a function that needs that client is actually called.

## 3. Content Safety in the Real World

A resident files this complaint: *"A man was assaulted at this intersection because the street light has been out for months."* This text describes real violence but is a legitimate safety concern. Should the system block it, flag it for human review, or pass it through? What factors would you weigh in making that decision?

The system should flag the message for human review so it can be routed to the proper authorities. Implementing a check that flags violent or distressing content for review, rather than blocking it outright, would keep legitimate safety concerns like this one from being silenced.
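
A hedged sketch of what that routing could look like, assuming Content Safety's 0-7 severity scale; the thresholds and the three-way outcome are my own illustrative choices, not Azure defaults:

```python
def route_complaint(categories: dict[str, int]) -> str:
    """Route a 311 complaint based on per-category severity scores (0-7).

    Illustrative policy: high-severity content is blocked outright,
    mid-severity goes to a human reviewer (e.g. a report describing real
    violence at a broken street light), and everything else passes through.
    """
    worst = max(categories.values(), default=0)
    if worst >= 6:
        return "block"
    if worst >= 2:
        return "flag_for_review"
    return "pass"

# The street-light complaint mentions an assault, so Violence scores nonzero:
print(route_complaint({"Hate": 0, "Violence": 3, "SelfHarm": 0, "Sexual": 0}))
# -> flag_for_review
# A routine pothole report sails through:
print(route_complaint({"Hate": 0, "Violence": 0, "SelfHarm": 0, "Sexual": 0}))
# -> pass
```

The key design choice is the middle band: it trades some reviewer workload for never auto-blocking a resident's legitimate safety report.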
134 changes: 101 additions & 33 deletions app/main.py
@@ -18,6 +18,7 @@

from dotenv import load_dotenv


load_dotenv()

# Resolve activity root so result.json lands in the right place regardless of CWD
@@ -44,15 +45,16 @@ def _get_openai_client():
"""Lazily initialize the Azure OpenAI client."""
global _openai_client
if _openai_client is None:
# TODO: Uncomment and configure
# from openai import AzureOpenAI
# _openai_client = AzureOpenAI(
# azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
# api_key=os.environ["AZURE_OPENAI_API_KEY"],
# api_version="2024-10-21",
# )
raise NotImplementedError("Configure the Azure OpenAI client")
#TODO: Uncomment and configure
from openai import AzureOpenAI
_openai_client = AzureOpenAI(
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
api_key=os.environ["AZURE_OPENAI_API_KEY"],
api_version="2024-10-21",
)
return _openai_client
# raise NotImplementedError("Configure the Azure OpenAI client")



def _get_content_safety_client():
@@ -62,14 +64,15 @@ def _get_content_safety_client():
        # NOTE: The Content Safety SDK handles API versioning internally --
        # no api_version parameter is needed (unlike the OpenAI SDK).
        from azure.ai.contentsafety import ContentSafetyClient
        from azure.core.credentials import AzureKeyCredential
        _content_safety_client = ContentSafetyClient(
            endpoint=os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"],
            credential=AzureKeyCredential(os.environ["AZURE_CONTENT_SAFETY_KEY"]),
        )
    return _content_safety_client



def _get_language_client():
@@ -79,14 +82,15 @@ def _get_language_client():
        # NOTE: The Language SDK handles API versioning internally --
        # no api_version parameter is needed (unlike the OpenAI SDK).
        from azure.ai.textanalytics import TextAnalyticsClient
        from azure.core.credentials import AzureKeyCredential
        _language_client = TextAnalyticsClient(
            endpoint=os.environ["AZURE_AI_LANGUAGE_ENDPOINT"],
            credential=AzureKeyCredential(os.environ["AZURE_AI_LANGUAGE_KEY"]),
        )
    return _language_client



# ---------------------------------------------------------------------------
@@ -101,15 +105,46 @@ def classify_311_request(request_text: str) -> dict:
    Returns:
        dict with keys: category, confidence, reasoning
    """
    # Step 1.1 - Get the OpenAI client
    client = _get_openai_client()

    # Step 1.2 - Ask the model for a structured JSON classification
    response = client.chat.completions.create(
        model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o"),
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a helpful assistant that classifies Memphis 311 service "
                    "requests into one of the following categories: Pothole, Noise Complaint, "
                    "Trash/Litter, Street Light, Water/Sewer, Other. Respond with a JSON object "
                    "with keys: category (one of the categories), confidence (0-1), and reasoning "
                    "(explaining why you chose that category)."
                ),
            },
            {"role": "user", "content": request_text},
        ],
        response_format={"type": "json_object"},
        temperature=0,
    )

    # Step 1.3 - Parse the JSON response
    return json.loads(response.choices[0].message.content)

# ---------------------------------------------------------------------------
# Step 2 - Check content safety
@@ -124,9 +159,34 @@ def check_content_safety(text: str) -> dict:
        dict with keys: safe (bool), categories (dict of category: severity)
    """
    # Step 2.1 - Get the Content Safety client
    client = _get_content_safety_client()

    # Step 2.2 - Analyze the text across all harm categories
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    response = client.analyze_text(AnalyzeTextOptions(text=text))

    # Step 2.3 - Collect per-category severities and decide overall safety
    categories_result = {}
    total_severity = 0
    for category in response.categories_analysis:
        severity = category.severity or 0
        categories_result[category.category] = severity
        total_severity += severity

    # Content is safe only when every category reports zero severity
    return {
        "safe": total_severity == 0,
        "categories": categories_result,
    }



# ---------------------------------------------------------------------------
@@ -142,11 +202,19 @@ def extract_key_phrases(text: str) -> list[str]:
        List of key phrase strings.
    """
    # Step 3.1 - Get the Language client
    client = _get_language_client()

    # Step 3.2 - Extract key phrases (the API takes a batch of documents)
    response = client.extract_key_phrases(documents=[text])

    # Step 3.3 - Return the phrases, or an empty list on error
    if not response or response[0].is_error:
        return []
    return response[0].key_phrases



def main():
"""Main function -- call all three Azure AI services."""
