How to turn off display of think process tokens when using qwen models with ollama #1446
              
Unanswered

cainiaocome asked this question in Q&A

Background: CopilotChat is configured through lazyvim. Below is a chat session asking about a hello-world script written in Python; as you can see, the think-process tokens (wrapped between <think>...</think>, it seems) are in the output as well. Is there a way to turn them off? I only need the output that comes after them. Thanks!

Replies: 2 comments
- You can technically override response.content in (see the sketch below the replies)

  0 replies
- Moving this to discussions as it's not really an issue.

  0 replies
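
The override suggested in the first reply boils down to stripping the `<think>...</think>` block before the reply text is shown. Below is a minimal Lua sketch of that post-processing step; it assumes you can get hold of the reply text in whatever hook your CopilotChat setup exposes, and the function name here is illustrative rather than a confirmed plugin API.

```lua
-- Remove every <think>...</think> block (plus trailing whitespace) from a reply.
-- Lua patterns: ".-" is the non-greedy "anything" match and "." also matches
-- newlines, so multi-line think blocks are removed as well.
local function strip_think(text)
  local cleaned = text:gsub("<think>.-</think>%s*", "")
  return cleaned
end

-- Example with a qwen-style reply:
local reply = "<think>\nthe user wants a hello-world script...\n</think>\nprint('hello, world')"
print(strip_think(reply))  --> print('hello, world')
```

Where exactly to call this depends on which hook your configuration exposes for post-processing responses; the truncated link in the first reply presumably points at the relevant spot.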
                  
                
            
  