ValueError: Target module CLIPEncoderLayer(
  (self_attn): CLIPAttention(
    (k_proj): Linear4bit(in_features=1024, out_features=1024, bias=True)
    (v_proj): Linear4bit(in_features=1024, out_features=1024, bias=True)
    (q_proj): Linear4bit(in_features=1024, out_features=1024, bias=True)
    (out_proj): Linear4bit(in_features=1024, out_features=1024, bias=True)
  )
  (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
  (mlp): CLIPMLP(
    (activation_fn): QuickGELUActivation()
    (fc1): Linear4bit(in_features=1024, out_features=4096, bias=True)
    (fc2): Linear4bit(in_features=4096, out_features=1024, bias=True)
  )
  (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
) is not supported. Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.
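The error indicates that `target_modules` matched the whole `CLIPEncoderLayer` container rather than one of the supported leaf layer types listed in the message. A minimal sketch of a workaround, assuming the PEFT library and an already-loaded 4-bit quantized CLIP model (the variable name `model` and the hyperparameter values below are placeholders, not taken from the original post), is to target the inner projection layers by name instead of the encoder layer itself:

```python
# Sketch only: point LoRA at the leaf Linear4bit projections inside each
# CLIPEncoderLayer instead of the container module. `model` is assumed to be
# the 4-bit quantized CLIP model that produced the traceback above.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                # placeholder rank
    lora_alpha=32,       # placeholder scaling
    lora_dropout=0.05,   # placeholder dropout
    # Match leaf linear layers by name suffix; containers such as
    # CLIPEncoderLayer, CLIPAttention, and CLIPMLP are not valid LoRA targets.
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Matching by name suffix attaches a LoRA adapter to every attention projection and MLP linear in the encoder while leaving the container modules untouched, which is what the supported-module list in the error is asking for.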