2 changes: 2 additions & 0 deletions nb/Gemma3N_(4B)-Conversational.ipynb
@@ -456,6 +456,7 @@
" dtype = None, # None for auto detection\n",
" max_seq_length = 1024, # Choose any for long context!\n",
" load_in_4bit = True, # 4 bit quantization to reduce memory\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" full_finetuning = False, # [NEW!] We have full finetuning now!\n",
" # token = \"YOUR_HF_TOKEN\", # HF Token for gated models\n",
")"
@@ -1920,6 +1921,7 @@
" model_name = \"gemma_3n_lora\", # YOUR MODEL YOU USED FOR TRAINING\n",
" max_seq_length = 2048,\n",
" load_in_4bit = True,\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" )\n",
"\n",
"messages = [{\n",
2 changes: 2 additions & 0 deletions nb/Gemma3N_(4B)-Vision.ipynb
@@ -428,6 +428,7 @@
"model, processor = FastVisionModel.from_pretrained(\n",
" \"unsloth/gemma-3n-E4B\",\n",
" load_in_4bit = True, # Use 4bit to reduce memory use. False for 16bit LoRA.\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for long context\n",
")"
]
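
Since the same eager override recurs in every notebook touched by this PR, a one-line sanity check after loading can confirm it took effect. This is a hypothetical addition, not part of the diff, assuming a recent transformers version where the config records the selected attention backend:

# Not part of the diff: verify the override was applied.
# Recent transformers releases store the active backend on the config.
print(model.config._attn_implementation)  # expected: "eager"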
@@ -1424,6 +1425,7 @@
" model, processor = FastVisionModel.from_pretrained(\n",
" model_name = \"gemma_3n_lora\", # YOUR MODEL YOU USED FOR TRAINING\n",
" load_in_4bit = True, # Set to False for 16bit LoRA\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" )\n",
" FastVisionModel.for_inference(model) # Enable for inference!\n",
"\n",
2 changes: 2 additions & 0 deletions nb/Kaggle-Gemma3N_(4B)-Conversational.ipynb
@@ -456,6 +456,7 @@
" dtype = None, # None for auto detection\n",
" max_seq_length = 1024, # Choose any for long context!\n",
" load_in_4bit = True, # 4 bit quantization to reduce memory\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" full_finetuning = False, # [NEW!] We have full finetuning now!\n",
" # token = \"YOUR_HF_TOKEN\", # HF Token for gated models\n",
")"
@@ -1920,6 +1921,7 @@
" model_name = \"gemma_3n_lora\", # YOUR MODEL YOU USED FOR TRAINING\n",
" max_seq_length = 2048,\n",
" load_in_4bit = True,\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" )\n",
"\n",
"messages = [{\n",
2 changes: 2 additions & 0 deletions nb/Kaggle-Gemma3N_(4B)-Vision.ipynb
@@ -428,6 +428,7 @@
"model, processor = FastVisionModel.from_pretrained(\n",
" \"unsloth/gemma-3n-E4B\",\n",
" load_in_4bit = True, # Use 4bit to reduce memory use. False for 16bit LoRA.\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for long context\n",
")"
]
@@ -1424,6 +1425,7 @@
" model, processor = FastVisionModel.from_pretrained(\n",
" model_name = \"gemma_3n_lora\", # YOUR MODEL YOU USED FOR TRAINING\n",
" load_in_4bit = True, # Set to False for 16bit LoRA\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" )\n",
" FastVisionModel.for_inference(model) # Enable for inference!\n",
"\n",
2 changes: 2 additions & 0 deletions original_template/Gemma3N_(4B)-Conversational.ipynb
@@ -432,6 +432,7 @@
" dtype = None, # None for auto detection\n",
" max_seq_length = 1024, # Choose any for long context!\n",
" load_in_4bit = True, # 4 bit quantization to reduce memory\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" full_finetuning = False, # [NEW!] We have full finetuning now!\n",
" # token = \"hf_...\", # use one if using gated models\n",
")"
@@ -1896,6 +1897,7 @@
" model_name = \"gemma-3n\", # YOUR MODEL YOU USED FOR TRAINING\n",
" max_seq_length = 2048,\n",
" load_in_4bit = True,\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

The comment mentions a "vision tower", which might be confusing in a conversational notebook. For clarity, consider a more general comment about the incompatibility with flex_attention.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation = \"eager\", # Force eager attention due to Gemma3N incompatibility with flex_attention\n",

" )\n",
"\n",
"messages = [{\n",
2 changes: 2 additions & 0 deletions original_template/Gemma3N_(4B)-Vision.ipynb
@@ -404,6 +404,7 @@
"model, processor = FastVisionModel.from_pretrained(\n",
" \"unsloth/gemma-3n-E4B\",\n",
" load_in_4bit = True, # Use 4bit to reduce memory use. False for 16bit LoRA.\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for long context\n",
")"
]
@@ -1400,6 +1401,7 @@
" model, processor = FastVisionModel.from_pretrained(\n",
" model_name=\"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
" load_in_4bit=True, # Set to False for 16bit LoRA\n",
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
Contributor

medium

For consistency with the surrounding code (e.g., model_name=\"lora_model\", load_in_4bit=True), consider removing the spaces around the = operator.

Suggested change
" attn_implementation = \"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",
" attn_implementation=\"eager\", # Gemma 3N vision tower is incompatible with flex_attention\n",

" )\n",
" FastVisionModel.for_inference(model) # Enable for inference!\n",
"\n",