Hey @ZachCalkins,

Good question!

And yes, there is: you can use torch.nn.LazyLinear().

Check out the documentation above for an example.

But in essence, the "Lazy" means "figure out the in_features parameter automatically".

For example:

# Original
self.classifier = nn.Sequential(
  nn.Flatten(),
  nn.Linear(in_features=hidden_units*7*7,
            out_features=output_shape)
)

Becomes:

# New with LazyLinear
self.classifier = nn.Sequential(
  nn.Flatten(),
  nn.LazyLinear(out_features=output_shape) # notice there is no "in_features" (it's inferred by the layer on the first forward pass)
)

Try it out and see how you go!
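Here's a minimal runnable sketch of the idea (the shapes, `output_shape=10`, and the dummy input are just made-up values for illustration): the lazy layer has no weights until the first batch passes through, at which point it reads the flattened feature size and materializes them.

```python
import torch
from torch import nn

output_shape = 10  # hypothetical number of classes

classifier = nn.Sequential(
    nn.Flatten(),
    nn.LazyLinear(out_features=output_shape),  # in_features inferred later
)

# Dummy batch: 32 samples of shape (8, 7, 7) -> flattens to 8*7*7 = 392 features
x = torch.randn(32, 8, 7, 7)

y = classifier(x)  # first forward pass materializes the weights

print(y.shape)                    # torch.Size([32, 10])
print(classifier[1].in_features)  # 392, inferred automatically
```

After that first call, the layer behaves exactly like a regular nn.Linear(392, 10), so you never had to compute the 8*7*7 by hand.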

In fact, there are many "Lazy" layers (these are quite new in PyTorch), try searching the document…

Answer selected by ZachCalkins