"name": "wide.ipynb",
"version": "0.3.2",
"provenance": [],
- "toc_visible": true,
"collapsed_sections": [
"MWW1TyjaecRh"
- ]
+ ],
+ "toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
...
"id": "mOtR1FzCef-u",
"colab_type": "code",
"colab": {}
-
},
"cell_type": "code",
"source": [
...
"    [age_buckets, 'education', 'occupation'], hash_bucket_size=1000),\n",
"]\n",
"\n",
- "model_dir = tempfile.mkdtemp()\n",
"model = tf.estimator.LinearClassifier(\n",
- "    model_dir=model_dir, feature_columns=base_columns + crossed_columns)"
+ "    model_dir=tempfile.mkdtemp(), feature_columns=base_columns + crossed_columns)"
],
"execution_count": 0,
"outputs": []
...
"results = model.evaluate(test_inpf)\n",
"clear_output()\n",
- "for key in sorted(results):\n",
- "    print('%s: %0.2f' % (key, results[key]))"
+ "for key, value in sorted(results.items()):\n",
+ "    print('%s: %0.2f' % (key, value))"
],
"execution_count": 0,
"outputs": []
...
"source": [
"If you'd like to see a working end-to-end example, you can download our\n",
"[example code](https://github.com/tensorflow/models/tree/master/official/wide_deep/census_main.py)\n",
- "and set the `model_type` flag to `wide`.\n",
- "\n",
- "## Adding Regularization to Prevent Overfitting\n",
- "\n",
- "Regularization is a technique used to avoid **overfitting**. Overfitting happens\n",
- "when your model does well on the data it is trained on, but worse on test data\n",
- "that the model has not seen before, such as live traffic. Overfitting generally\n",
- "occurs when a model is excessively complex, such as having too many parameters\n",
- "relative to the number of observed training data. Regularization allows for you\n",
- "to control your model's complexity and makes the model more generalizable to\n",
- "unseen data.\n",
- "\n",
- "In the Linear Model library, you can add L1 and L2 regularizations to the model\n",
- "as:"
- ]
- },
- {
- "metadata": {
- "id": "cVv2HsqocYxO",
- "colab_type": "code",
- "colab": {}
- },
- "cell_type": "code",
- "source": [
- "#TODO(markdaoust): is the regularization strength here not working?\n",
- "model = tf.estimator.LinearClassifier(\n",
- "    model_dir=model_dir, feature_columns=base_columns + crossed_columns,\n",
- "    optimizer=tf.train.FtrlOptimizer(\n",
- "        learning_rate=0.1,\n",
- "        l1_regularization_strength=0.1,\n",
- "        l2_regularization_strength=0.1))\n",
- "\n",
- "model.train(train_inpf)\n",
- "\n",
- "results = model.evaluate(test_inpf)\n",
- "clear_output()\n",
- "for key in sorted(results):\n",
- "    print('%s: %0.2f' % (key, results[key]))"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "metadata": {
- "id": "5AqvPEQwcYxU",
- "colab_type": "text"
- },
- "cell_type": "markdown",
- "source": [
- "One important difference between L1 and L2 regularization is that L1\n",
- "regularization tends to make model weights stay at zero, creating sparser\n",
- "models, whereas L2 regularization also tries to make the model weights closer to\n",
- "zero but not necessarily zero. Therefore, if you increase the strength of L1\n",
- "regularization, you will have a smaller model size because many of the model\n",
- "weights will be zero. This is often desirable when the feature space is very\n",
- "large but sparse, and when there are resource constraints that prevent you from\n",
- "serving a model that is too large.\n",
- "\n",
- "In practice, you should try various combinations of L1, L2 regularization\n",
- "strengths and find the best parameters that best control overfitting and give\n",
- "you a desirable model size."
+ "and set the `model_type` flag to `wide`."
]
},
{
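The deleted cells configured L1/L2 regularization through the optimizer, then retrained and re-evaluated. A condensed sketch of that pattern, assuming TF 1.x; `base_columns`, `crossed_columns`, `train_inpf`, and `test_inpf` come from earlier cells of the notebook and are assumed to be in scope:

```python
import tempfile
import tensorflow as tf

# base_columns, crossed_columns, train_inpf, and test_inpf are defined
# in earlier cells of the notebook and are assumed here.
model = tf.estimator.LinearClassifier(
    model_dir=tempfile.mkdtemp(),
    feature_columns=base_columns + crossed_columns,
    optimizer=tf.train.FtrlOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.1,   # L1 drives many weights exactly to zero
        l2_regularization_strength=0.1))  # L2 shrinks weights toward zero

model.train(train_inpf)

results = model.evaluate(test_inpf)
for key, value in sorted(results.items()):
    print('%s: %0.2f' % (key, value))
```

Raising `l1_regularization_strength` zeroes out more weights and shrinks the served model, which is the sparsity effect the deleted markdown describes.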
...
]
}
]
- }
+ }