Problem:
Goal: Add support for ranking models in PyTorch.
New Functionality
- Models
  - Support ranking models
- Systems
  - Make sure torch ranking models can be served
Starting Point:
Implement the base classes of the block-API in PyTorch
People: @marcromeyn
Currently the block-API in T4Rec uses a design similar to Keras to allow for modules that lazily initialize their variables. We would like to deprecate this in favor of the native way to achieve the same thing that PyTorch launched recently (see the sketch after the list below).
- Introducing Block models#1087
- Adding ParallelBlock models#1088
- Adding registry models#1090
- Adding Link models#1091
- Adding simple aggregations: concat & stack models#1092
- Adding sample_batch & sample_features models#1095
- Adding SchemaTrackingMixin models#1109
- Add Encoder & Predictor models#1112
- Adding RouterBlock models#1096
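The PyTorch-native mechanism referred to above is the lazy-module machinery (`LazyModuleMixin` and modules such as `nn.LazyLinear`), which defers creating parameters until the first forward pass. A minimal sketch in plain PyTorch, not the Merlin block-API, of what replacing the Keras-style build step could look like:

```python
import torch
import torch.nn as nn

# Keras-style blocks in T4Rec defer weight creation until they first see an
# input. PyTorch's lazy modules do the same thing natively: nn.LazyLinear
# infers in_features from the first batch it receives.
block = nn.Sequential(
    nn.LazyLinear(out_features=128),  # in_features inferred on first call
    nn.ReLU(),
    nn.LazyLinear(out_features=64),
)

x = torch.randn(32, 20)  # batch of 32 rows with 20 input features
y = block(x)             # first call materializes the lazy parameters
print(y.shape)           # torch.Size([32, 64])
```

Custom blocks can opt into the same behaviour by subclassing `torch.nn.modules.lazy.LazyModuleMixin`.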
Input-blocks
People: @marcromeyn
- Implement Continuous & EmbeddingTables
- Implement TabularInputBlock (sketched below) - Add Encoder & Predictor models#1112
- Add schema tracing models#1149
- Add support for sequential-features in input-blocks
- Do performance testing of holding multiple features in a single embedding-table
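To make the intent of the input blocks concrete, here is a hypothetical plain-PyTorch sketch of a tabular input block that embeds categorical columns and concatenates them with continuous columns. The class and argument names are illustrative only, not the Merlin API:

```python
from typing import Dict

import torch
import torch.nn as nn


class TabularInputSketch(nn.Module):
    """Illustrative only: embed categorical columns, then concatenate the
    embeddings with the continuous columns into one dense tensor."""

    def __init__(self, cardinalities: Dict[str, int], dim: int = 16):
        super().__init__()
        self.embeddings = nn.ModuleDict(
            {name: nn.Embedding(card, dim) for name, card in cardinalities.items()}
        )

    def forward(self, categorical: Dict[str, torch.Tensor], continuous: torch.Tensor):
        embedded = [self.embeddings[name](ids) for name, ids in categorical.items()]
        return torch.cat(embedded + [continuous], dim=-1)


inputs = TabularInputSketch({"user_id": 1000, "item_id": 5000}, dim=16)
out = inputs(
    {"user_id": torch.randint(0, 1000, (32,)), "item_id": torch.randint(0, 5000, (32,))},
    torch.randn(32, 3),
)
print(out.shape)  # torch.Size([32, 35]) -> 16 + 16 + 3
```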
Output-blocks
People: @edknv & @marcromeyn
- Add BinaryOutput models#1099 (sketched below)
- Add RegressionOutput models#1115
- Add CategoricalOutput models#1158
- Port TabularOutputBlock (for multi-task learning)
- Adding MMOE & PLE models#1173
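As an illustration of what an output block provides, below is a hypothetical plain-PyTorch sketch of a BinaryOutput-style head: a single-logit projection paired with the default loss used to train it. Names are illustrative, not the final API:

```python
import torch
import torch.nn as nn


class BinaryOutputSketch(nn.Module):
    """Illustrative binary-classification head: one logit per example plus
    the default training loss (binary cross-entropy with logits)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.logit = nn.Linear(in_features, 1)
        self.default_loss = nn.BCEWithLogitsLoss()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        return self.logit(hidden).squeeze(-1)


head = BinaryOutputSketch(in_features=64)
hidden = torch.randn(32, 64)
targets = torch.randint(0, 2, (32,)).float()
loss = head.default_loss(head(hidden), targets)
```

A TabularOutputBlock for multi-task learning would hold several such heads (e.g. in an `nn.ModuleDict`) and route the shared hidden state to each of them.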
Blocks
- Adding MLPBlock models#1093
- Adding CrossBlock (used in DCN-v2; sketched below) models#1172
- Add DLRM block models#1162
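CrossBlock refers to the DCN-v2 cross layer, which computes `x_{l+1} = x_0 * (W x_l + b) + x_l`, where `*` is element-wise and `x_0` is the block's original input. A plain-PyTorch sketch with illustrative names, not the Merlin implementation:

```python
import torch
import torch.nn as nn


class CrossLayerSketch(nn.Module):
    """One DCN-v2 cross layer: x_{l+1} = x_0 * (W @ x_l + b) + x_l."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x0: torch.Tensor, xl: torch.Tensor) -> torch.Tensor:
        return x0 * self.linear(xl) + xl


class CrossBlockSketch(nn.Module):
    """Stacks several cross layers, feeding the original input x_0 to each."""

    def __init__(self, dim: int, depth: int = 2):
        super().__init__()
        self.layers = nn.ModuleList([CrossLayerSketch(dim) for _ in range(depth)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x0, xl = x, x
        for layer in self.layers:
            xl = layer(x0, xl)
        return xl


block = CrossBlockSketch(dim=32, depth=3)
print(block(torch.randn(16, 32)).shape)  # torch.Size([16, 32])
```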
Models
- Add DLRM Model models#1171
- Add DCNModel
- Export Model with Input / Output Schemas (Serve in Systems Ensemble)
- Ensure model outputs are compatible with Triton (tensor or NamedTuple instead of dict)
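On the Triton item above: the model's forward should return a plain tensor or a NamedTuple rather than a dict, since NamedTuples script/trace cleanly and give each output a stable name for the serving ensemble. A hypothetical sketch of a multi-task model with named outputs:

```python
from typing import NamedTuple

import torch
import torch.nn as nn


class RankingOutputs(NamedTuple):
    # Named fields instead of a dict, so the exported model exposes
    # stable, named outputs that a serving ensemble can reference.
    click: torch.Tensor
    conversion: torch.Tensor


class TwoTaskModelSketch(nn.Module):
    """Hypothetical multi-task model returning a NamedTuple instead of a dict."""

    def __init__(self, in_features: int = 64):
        super().__init__()
        self.click_head = nn.Linear(in_features, 1)
        self.conversion_head = nn.Linear(in_features, 1)

    def forward(self, hidden: torch.Tensor) -> RankingOutputs:
        return RankingOutputs(
            self.click_head(hidden).squeeze(-1),
            self.conversion_head(hidden).squeeze(-1),
        )


model = torch.jit.script(TwoTaskModelSketch())  # NamedTuple returns are scriptable
outputs = model(torch.randn(8, 64))
print(outputs.click.shape, outputs.conversion.shape)
```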
Examples
- Getting Started from #1159
- Serving a ranking model with Systems (similar to Serving Ranking Models With Merlin Systems (TF))
- Multi-task model (multiple named model outputs)