This repository was archived by the owner on Jul 10, 2025. It is now read-only.

Commit 4a3905d

Added design review notes

1 parent 93601a4 commit 4a3905d

1 file changed: +16 −0 lines changed

rfcs/20180905-deprecate-collections.md

Lines changed: 16 additions & 0 deletions
@@ -610,3 +610,19 @@ Tentatively, we have the following plan for deprecating collections
1. Figure out a concrete plan for removing CONCATENABLE_VARIABLES collection.

# Design review notes

* Going through the dependencies of this effort:
  * Object-based checkpointing is slated for 2.0.
  * V2 metrics are in 2.0 (so the old metrics are deprecated).
  * Functional control flow is slated for 2.0.
* Question: What is special about tables?
  * Answer: Tables are immutable, so they have to be initialized. We currently handle variables specially in defun; we could generalize that to tables, but we haven't looked into that yet. Tables are not variables today because they have non-Tensor representations (e.g., a std::hash_map).
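The two properties discussed above can be sketched in plain Python. Note that `ImmutableTable` below is a made-up illustrative class, not a TensorFlow API: it is backed by an ordinary hash map (standing in for the std::hash_map mentioned above) and must be explicitly initialized before any lookup, roughly mirroring how lookup tables behave in a TF1 graph.

```python
# Illustrative sketch only -- ImmutableTable is a hypothetical class, not a
# TensorFlow API. It mimics the two properties from the notes: the table is
# backed by a non-Tensor representation (a plain dict here) and it must be
# initialized before it can be used.

class ImmutableTable:
    def __init__(self, keys, values, default_value):
        self._pending = dict(zip(keys, values))  # data for the init step
        self._data = None                        # populated by initialize()
        self._default = default_value

    def initialize(self):
        """Analogous to running the table's init op once."""
        if self._data is None:
            self._data = dict(self._pending)  # table is immutable from here on

    def lookup(self, key):
        if self._data is None:
            raise RuntimeError("Table used before it was initialized")
        return self._data.get(key, self._default)

table = ImmutableTable(keys=["a", "b"], values=[1, 2], default_value=-1)
table.initialize()
print(table.lookup("a"))  # 1
print(table.lookup("z"))  # -1 (the default value)
```

Calling `lookup` before `initialize` raises, which is the failure mode that makes tables need special handling in defun today.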
* Question: Can immutable tables be variables with a variant in them?
  * Answer: This is a worthy direction to explore, but we might not have enough time to do it.
* Question: How do we create the "init_op" for serving? Do we need to track tables in tf.Checkpointable like we do variables?
  * Answer: Making tables variables with variants does this automatically. The serialized format (for SavedModel) doesn't need to change.
* Question: How do we track all the tables?
  * Answer: These tables are created by feature columns, which are used to create a Layer object; that Layer object can track all the tables/assets of interest. Make tables Checkpointable and use the infrastructure for tracking checkpointables, or a parallel mechanism. For V1, have a table_creator_scope() that can track all created tables.
* Other collections:
  * UPDATE_OPS: defun the layer and it is no longer a problem? This may be problematic for parameter-server training if we have to wait for the updates before the next iteration. It can be addressed by playing with the strictness of function execution.
