Commit 6f46c17

modified APS quickstart to satisfy PR comments
1 parent d6ac9e5 commit 6f46c17

2 files changed: +133 additions, -21 deletions

articles/cognitive-services/personalizer/includes/quickstart-sdk-python.md

Lines changed: 132 additions & 20 deletions
@@ -43,7 +43,7 @@ pip install azure-cognitiveservices-personalizer
 
 ### Create a new Python application
 
-Create a new Python application file named "Personalizer_quickstart.py". This application will handle both the example scenario logic and calls the Personalizer APIs.
+Create a new Python application file named "personalizer_quickstart.py". This application will handle both the example scenario logic and the calls to the Personalizer APIs.
 
 In the file, create variables for your resource's endpoint and subscription key.
 
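The hunk above doesn't show the variable definitions themselves. A minimal sketch of what they might look like; the variable and environment-variable names here (`PERSONALIZATION_BASE_URL`, `RESOURCE_KEY`, `PERSONALIZER_ENDPOINT`, `PERSONALIZER_KEY`) are hypothetical, and reading from environment variables simply keeps the key out of source control:

```python
import os

# Hypothetical names; substitute your resource's actual endpoint and key
# from the Azure portal.
PERSONALIZATION_BASE_URL = os.environ.get(
    "PERSONALIZER_ENDPOINT",
    "https://<your-resource-name>.cognitiveservices.azure.com/",
)
RESOURCE_KEY = os.environ.get("PERSONALIZER_KEY", "<your-resource-key>")
```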
@@ -101,11 +101,96 @@ Recall for our grocery website scenario, actions are the possible food items to
 
 ```python
 actions_and_features = {
-    'pasta_1': {'brand_info':{'company':'pasta_inc'}, 'attributes':{'qty':1, 'cuisine':'italian', 'price':12}, 'dietary_attributes':{'vegan': False, 'low_carb': False, 'high_protein': False, 'vegetarian': False, 'low_fat': True, 'low_sodium': True}},
-    'bbq_1': {'brand_info':{'company':'ambisco'}, 'attributes':{'qty':2, 'category':'bbq', 'price':20}, 'dietary_attributes':{'vegan': False, 'low_carb': True, 'high_protein': True, 'vegetarian': False, 'low_fat': False, 'low_sodium': False}},
-    'bao_1': {'brand_info':{'company':'bao_and_co'}, 'attributes':{'qty':4, 'category':'chinese', 'price':8}, 'dietary_attributes':{'vegan': False, 'low_carb': True, 'high_protein': True, 'vegetarian': False, 'low_fat': True, 'low_sodium': False}},
-    'hummus_1': {'brand_info':{'company':'garbanzo_inc'}, 'attributes':{'qty':1, 'category':'breakfast', 'price':5}, 'dietary_attributes':{'vegan': True, 'low_carb': False, 'high_protein': True, 'vegetarian': True, 'low_fat': False, 'low_sodium': False}}
+    'pasta': {
+        'brand_info': {
+            'company': 'pasta_inc'
+        },
+        'attributes': {
+            'qty': 1, 'cuisine': 'italian',
+            'price': 12
+        },
+        'dietary_attributes': {
+            'vegan': False,
+            'low_carb': False,
+            'high_protein': False,
+            'vegetarian': False,
+            'low_fat': True,
+            'low_sodium': True
+        }
+    },
+    'bbq': {
+        'brand_info': {
+            'company': 'ambisco'
+        },
+        'attributes': {
+            'qty': 2,
+            'category': 'bbq',
+            'price': 20
+        },
+        'dietary_attributes': {
+            'vegan': False,
+            'low_carb': True,
+            'high_protein': True,
+            'vegetarian': False,
+            'low_fat': False,
+            'low_sodium': False
+        }
+    },
+    'bao': {
+        'brand_info': {
+            'company': 'bao_and_co'
+        },
+        'attributes': {
+            'qty': 4,
+            'category': 'chinese',
+            'price': 8
+        },
+        'dietary_attributes': {
+            'vegan': False,
+            'low_carb': True,
+            'high_protein': True,
+            'vegetarian': False,
+            'low_fat': True,
+            'low_sodium': False
+        }
+    },
+    'hummus': {
+        'brand_info': {
+            'company': 'garbanzo_inc'
+        },
+        'attributes': {
+            'qty': 1,
+            'category': 'breakfast',
+            'price': 5
+        },
+        'dietary_attributes': {
+            'vegan': True,
+            'low_carb': False,
+            'high_protein': True,
+            'vegetarian': True,
+            'low_fat': False,
+            'low_sodium': False
+        }
+    },
+    'veg_platter': {
+        'brand_info': {
+            'company': 'farm_fresh'
+        },
+        'attributes': {
+            'qty': 1,
+            'category': 'produce',
+            'price': 7
+        },
+        'dietary_attributes': {
+            'vegan': True,
+            'low_carb': True,
+            'high_protein': False,
+            'vegetarian': True,
+            'low_fat': True,
+            'low_sodium': True
+        }
     }
+}
 
 def get_actions():
     res = []
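As a quick sanity check on the reshaped dictionary above: every action should carry the same three feature groups. A standalone verification, using a trimmed two-item copy of the dictionary for illustration only:

```python
# Trimmed copy of actions_and_features from the quickstart (illustration only).
actions_and_features = {
    'pasta': {
        'brand_info': {'company': 'pasta_inc'},
        'attributes': {'qty': 1, 'cuisine': 'italian', 'price': 12},
        'dietary_attributes': {'vegan': False, 'low_carb': False},
    },
    'veg_platter': {
        'brand_info': {'company': 'farm_fresh'},
        'attributes': {'qty': 1, 'category': 'produce', 'price': 7},
        'dietary_attributes': {'vegan': True, 'low_carb': True},
    },
}

# Every action exposes exactly the three expected feature groups.
for action_id, features in actions_and_features.items():
    assert set(features) == {'brand_info', 'attributes', 'dietary_attributes'}
```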
@@ -117,29 +202,47 @@ def get_actions():
 
 ## Define users and their context features
 
-Context can represent the current state of your application, system, environment, or user. The following code creates a dictionary with user preferences, and the `get_context()` function assembles the features into a list for one particular user, which will be used later when calling the Rank API. In our grocery website scenario, the context consists of dietary preferences, a historical average of the order price, and the web browser type. Let's assume our users are always on the move and include a location context feature, which we choose randomly to simulate their travels every time `get_context()` is called.
+Context can represent the current state of your application, system, environment, or user. The following code creates a dictionary with user preferences, and the `get_context()` function assembles the features into a list for one particular user, which will be used later when calling the Rank API. In our grocery website scenario, the context consists of dietary preferences and a historical average of the order price. Let's assume our users are always on the move and include a location, time of day, and application type, which we choose randomly to simulate changing contexts every time `get_context()` is called. Finally, `get_random_users()` creates a random list of five users from the user profiles, which will be used to simulate Rank/Reward calls later on.
 
 ```python
-context_features = {
-    'Bill': {'dietary_preferences': 'low_carb', 'avg_order_price': '0-20', 'browser_type': 'edge'},
-    'Satya': {'dietary_preferences': 'low_sodium', 'avg_order_price': '201+', 'browser_type': 'safari'},
-    'Amy': {'dietary_preferences': {'vegan', 'vegetarian'}, 'avg_order_price': '21-50', 'browser_type': 'edge'},
+user_profiles = {
+    'Bill': {
+        'dietary_preferences': 'low_carb',
+        'avg_order_price': '0-20',
+        'browser_type': 'edge'
+    },
+    'Satya': {
+        'dietary_preferences': 'low_sodium',
+        'avg_order_price': '201+',
+        'browser_type': 'safari'
+    },
+    'Amy': {
+        'dietary_preferences': {
+            'vegan', 'vegetarian'
+        },
+        'avg_order_price': '21-50',
+        'browser_type': 'edge'
+    },
 }
 
 def get_context(user):
-    day_context = {'location': random.choice(['west', 'east', 'midwest'])}
-    res = [context_features[user], day_context]
+    location_context = {'location': random.choice(['west', 'east', 'midwest'])}
+    time_of_day = {'time_of_day': random.choice(['morning', 'afternoon', 'evening'])}
+    app_type = {'application_type': random.choice(['edge', 'safari', 'edge_mobile', 'mobile_app'])}
+    res = [user_profiles[user], location_context, time_of_day, app_type]
     return res
+
+def get_random_users(k=5):
+    return random.choices(list(user_profiles.keys()), k=k)
 ```

135-
The context features in this quick-start are simplistic, however, in a real production system, designing your [features] (../concepts-features.md) and [evaluating their effectiveness](../concept-feature-evaluation.md) can be non-trivial. You can refer to the aforementioned documentation for guidance
238+
The context features in this quick-start are simplistic, however, in a real production system, designing your [features](../concepts-features.md) and [evaluating their effectiveness](../concept-feature-evaluation.md) can be non-trivial. You can refer to the aforementioned documentation for guidance
136239

137240

138241
## Define a reward score based on user behavior
139242

140243
The reward score can be considered an indication how "good" the personalized action is. In a real production system, the reward score should be designed to align with your business objectives and KPIs. For example, your application code can be instrumented to detect a desired user behavior (for example, a purchase) that aligns with your business objective (for example, increased revenue).
141244

142-
In our grocery website scenario, we have three users: Bill, Satya, and Amy each with their own preferences. If a user purchases the product chosen by Personalizer, a reward score of 1.0 will be sent to the Reward API. Otherwise, the default reward of 0.0 will be used. In a real production system, determining how to design the [reward](../concept-rewards.md) can be non-trivial and may require some experimentation.
245+
In our grocery website scenario, we have three users: Bill, Satya, and Amy each with their own preferences. If a user purchases the product chosen by Personalizer, a reward score of 1.0 will be sent to the Reward API. Otherwise, the default reward of 0.0 will be used. In a real production system, determining how to design the [reward](../concept-rewards.md) may require some experimentation.
143246

144247
In the code below, the users' preferences and responses to the actions is hard-coded as a series of conditional statements, and explanatory text is included in the code for demonstrative purposes. In a real scenario, Personalizer will learn user preferences from the data sent in Rank and Reward API calls. You won't define these explicitly as in the example code.
145248

@@ -163,7 +266,7 @@ def get_reward_score(user, actionid, context):
             reward_score = 1.0
             print("Bill is visiting his friend Warren in the midwest. There he's willing to spend more on food as long as it's low carb, so Bill bought " + actionid + ".")
 
-        elif (action['attributes']['price'] > 10) and (context[1]['location'] != "midwest"):
+        elif (action['attributes']['price'] >= 10) and (context[1]['location'] != "midwest"):
             print("Bill didn't buy", actionid, "because the price was too high when not visiting his friend Warren in the midwest.")
 
         elif (action['dietary_attributes']['low_carb'] == False) and (context[1]['location'] == "midwest"):
@@ -172,7 +275,7 @@ def get_reward_score(user, actionid, context):
     elif user == 'Satya':
         if action['dietary_attributes']['low_sodium'] == True:
             reward_score = 1.0
-            print("Satya is health conscious, so he bought", actionid, "since it's low in sodium.")
+            print("Satya is health conscious, so he bought", actionid, "since it's low in sodium.")
         else:
             print("Satya did not buy", actionid, "because it's not low sodium.")
 
@@ -193,7 +296,7 @@ A Personalizer event cycle consists of [Rank](#request-the-best-action) and [Rew
 
 ### Request the best action
 
-In a Rank call, you need to provide at least two arguments: a list of `RankActions` (_actions and their features_), and a list of (_context_) features. The response will include the `reward_action_id`, which is the ID of the action Personalizer has determined is best for the given context. The response also includes the `event_id`, which is needed in the Reward API so Personalize knows how to link the data from the Reward and Rank calls. For more information, refer to the [Rank API docs](/rest/api/personalizer/1.0/rank/rank).
+In a Rank call, you need to provide at least two arguments: a list of `RankableActions` (_actions and their features_) and a list of (_context_) features. The response will include the `reward_action_id`, which is the ID of the action Personalizer has determined is best for the given context. The response also includes the `event_id`, which is needed in the Reward API so Personalizer knows how to link the data from the Reward and Rank calls. For more information, refer to the [Rank API docs](/rest/api/personalizer/1.0/rank/rank).
 
199302
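To make the Rank response and the `event_id` linkage concrete without calling the service, here is a self-contained mock; everything in it is illustrative, with `FakeClient` standing in for the real `PersonalizerClient`:

```python
import uuid

class FakeRankResponse:
    """Stand-in for a Rank API response: the chosen action plus an event ID."""
    def __init__(self, reward_action_id, event_id):
        self.reward_action_id = reward_action_id
        self.event_id = event_id

class FakeClient:
    """Illustrative stand-in for PersonalizerClient; always picks the first action."""
    def rank(self, actions, context_features):
        return FakeRankResponse(reward_action_id=actions[0],
                                event_id=str(uuid.uuid4()))

client = FakeClient()
response = client.rank(actions=['pasta', 'bbq'],
                       context_features=[{'location': 'west'}])
# The event_id from the Rank response is what you pass back in the Reward call,
# so Personalizer can join the two halves of the event.
```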
### Send a reward
@@ -203,13 +306,13 @@ In a Reward call, you need to provide two arguments: the `event_id`, which links
 
 ### Run a Rank and Reward cycle
 
-The following code loops through a single cycle of Rank and Reward calls for each of the three example users, then prints relevant information to the console at each step.
+The following code runs a cycle of Rank and Reward calls for each of five randomly selected example users, then prints relevant information to the console at each step.
 
 ```python
 def run_personalizer_cycle():
     actions = get_actions()
-
-    for user in context_features.keys():
+    user_list = get_random_users()
+    for user in user_list:
         print("------------")
         print("User:", user, "\n")
         context = get_context(user)
@@ -227,9 +330,18 @@ def run_personalizer_cycle():
         client.events.reward(event_id=eventid, value=reward_score)
         print("\nA reward score of", reward_score, "was sent to Personalizer.")
         print("------------\n")
+
+continue_loop = True
+while continue_loop:
+    run_personalizer_cycle()
+
+    br = input("Press Q to exit, or any other key to run another loop: ")
+    if br.lower() == 'q':
+        continue_loop = False
 ```
 
 
+
 ## Run the program
 
 Once all the above code is included in your Python file, you can run it from your application directory.

articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ zone_pivot_groups: programming-languages-set-six
 
 # Quickstart: Getting started with the Personalizer client library
 
-Imagine a scenario where a grocery e-retailer wishes to increase revenue by showing relevant and personalized products to each customer visiting their website. On the main page, there's a "Featured Product" section that displays a product to prospective customers. However, the e-retailer would like to determine how to show the right product to the right customer in order to maximize the likelihood of a purchase.
+Imagine a scenario where a grocery e-retailer wishes to increase revenue by showing relevant and personalized products to each customer visiting their website. On the main page, there's a "Featured Product" section that displays a prepared meal product to prospective customers. However, the e-retailer would like to determine how to show the right product to the right customer in order to maximize the likelihood of a purchase.
 
 In this quickstart, you'll learn how to use the Azure Personalizer service to solve this problem in an automated, scalable, and adaptable fashion using the power of reinforcement learning. You'll learn how to create actions and their features, context features, and reward scores. You'll use the Personalizer client library to make calls to the [Rank and Reward APIs](what-is-personalizer.md#rank-and-reward-apis). You'll also run a cycle of Rank and Reward calls for three example users.
 