
Commit cbe68a3

Merge pull request #1523 from Panchadip-128/Added-Generative-AI-&-LLM>Copilot-LLM-Ext-for-VS-Code-IDE-files
Added Generative AI & LLM>Copilot LLM ext for VS Code IDE files
2 parents ead412f + 056f6ff commit cbe68a3

File tree

15 files changed: +2154 -0 lines changed

Lines changed: 25 additions & 0 deletions
{
    "env": {
        "browser": false,
        "commonjs": true,
        "es6": true,
        "node": true,
        "mocha": true
    },
    "parserOptions": {
        "ecmaVersion": 2018,
        "ecmaFeatures": {
            "jsx": true
        },
        "sourceType": "module"
    },
    "rules": {
        "no-const-assign": "warn",
        "no-this-before-super": "warn",
        "no-undef": "warn",
        "no-unreachable": "warn",
        "no-unused-vars": "warn",
        "constructor-super": "warn",
        "valid-typeof": "warn"
    }
}
Lines changed: 8 additions & 0 deletions
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the extensions.json format
    "recommendations": [
        "dbaeumer.vscode-eslint",
        "ms-vscode.extension-test-runner"
    ]
}
Lines changed: 17 additions & 0 deletions
// A launch configuration that launches the extension inside a new window
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run Extension",
            "type": "extensionHost",
            "request": "launch",
            "args": [
                "--extensionDevelopmentPath=${workspaceFolder}"
            ]
        }
    ]
}
Lines changed: 5 additions & 0 deletions
import { defineConfig } from '@vscode/test-cli';

export default defineConfig({
    files: 'test/**/*.test.js',
});
Lines changed: 10 additions & 0 deletions
.vscode/**
.vscode-test/**
test/**
.gitignore
.yarnrc
vsc-extension-quickstart.md
**/jsconfig.json
**/*.map
**/.eslintrc.json
**/.vscode-test.*
Lines changed: 9 additions & 0 deletions
# Change Log

All notable changes to the "llm-copilot" extension will be documented in this file.

Check [Keep a Changelog](http://keepachangelog.com/) for recommendations on how to structure this file.

## [Unreleased]

- Initial release
Lines changed: 82 additions & 0 deletions
LLM Copilot Extension in VS Code
----------------------------------

1. Introduction:
----------------
Welcome to the documentation for the VS Code Copilot Extension. This extension was developed to enhance coding efficiency by integrating advanced language models such as Llama 3.1, OpenAI's GPT-3.5-turbo, and Gemini 1.5. These models offer powerful code generation and suggestion capabilities and can help solve complex coding problems. The extension is designed with a user-friendly interface and can be easily integrated into your existing workflow.

2. Prerequisites:
-----------------
Before using this extension, make sure the following are installed on your system:
- Visual Studio Code (version 1.92.0 or higher)
- Node.js (for running the extension)
- TypeScript (for compiling TypeScript files)
You will also need API keys from Together.ai, Groq, OpenAI, or another supported provider to interact with the language models.

3. Installation:
------------------
- First, clone this repository.
- Run "npm install" to install all necessary dependencies.
- Run "npm run compile" to compile the TypeScript files.
- To test the extension, press F5 in VS Code to open a new window with the extension running, or run the following command in a terminal window:
  code --extensionDevelopmentPath=D:\Extension\extension\ (adjust to your exact file location path)

4. Overview of package.json:
----------------------------
The package.json file is crucial, as it defines the extension's metadata, dependencies, and commands. Key elements include:
- Name and Version: The extension is named codesuggestion, with version 0.0.1.
- Engines: Specifies compatibility with VS Code version 1.92.0 or higher.
- Contributes: Defines the command extension.openChat to open the chat interface.
- Scripts: Includes commands for compiling TypeScript, linting, and testing.
- Dependencies: Includes axios for API calls, plus development dependencies for TypeScript, linting, and testing.
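Put together, the fields described above might look like the following package.json fragment. This is only a sketch: the name, version, engine range, command, and axios dependency come from this document, while the command title, script bodies, and version numbers are assumptions.

```json
{
  "name": "codesuggestion",
  "version": "0.0.1",
  "engines": {
    "vscode": "^1.92.0"
  },
  "contributes": {
    "commands": [
      {
        "command": "extension.openChat",
        "title": "Open LLM Chat"
      }
    ]
  },
  "scripts": {
    "compile": "tsc -p ./",
    "lint": "eslint .",
    "test": "vscode-test"
  },
  "dependencies": {
    "axios": "^1.7.0"
  }
}
```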

5. Setting Up the Command in extension.ts:
------------------------------------------
The extension.ts file contains the core logic:
- Imports: Brings in the necessary modules, such as vscode and axios.
- Activate Function: The entry point when the extension is activated; this is where the command extension.openChat is registered.
- Webview Setup: The command opens a Webview panel that displays the chat interface, allowing users to interact with the AI.

6. Making API Calls to AI Models:
---------------------------------
The getCodeSnippet function is central to the extension's operation:
- Functionality: Sends a POST request to the selected AI model's API, passing the user's query.
- Response Handling: The response is processed, and the code snippet is returned and displayed in the Webview.
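The response-handling step can be sketched as a small pure helper. The function name is hypothetical (it is not taken from the extension's source): given the raw text a model returns, it pulls out the first fenced code block so only the snippet is shown in the Webview, falling back to the whole reply if there is no fence.

```javascript
// Hypothetical sketch of the response-handling step: extract the first fenced
// code block from a model's reply, or fall back to the raw text.
function extractCodeSnippet(responseText) {
  // Match ```lang\n ... ``` non-greedily; the language tag is optional.
  const match = responseText.match(/```[\w+-]*\n([\s\S]*?)```/);
  return match ? match[1].trimEnd() : responseText;
}
```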

7. User Interface:
-------------------
The user interface is built with HTML, CSS, and JavaScript:
- Chat Interface: Users can type queries and select the desired AI model from a dropdown menu.
- Response Display: The AI's response is displayed in a code-block format for easy copying and pasting.
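A minimal sketch of how such an interface could be generated. The helper name and markup here are illustrative, not the extension's actual source, and the surrounding vscode wiring appears only in comments.

```javascript
// In the real extension, the command handler would create a Webview panel and
// assign this HTML, along the lines of:
//   const panel = vscode.window.createWebviewPanel(
//     'llmChat', 'LLM Chat', vscode.ViewColumn.One, { enableScripts: true });
//   panel.webview.html = getWebviewContent(['Llama 3.1', 'GPT-3.5-turbo', 'Gemini 1.5']);

// Pure helper (hypothetical name): build the chat interface HTML, including
// the model dropdown and a <pre> block where the returned snippet is rendered
// for easy copying.
function getWebviewContent(models) {
  const options = models
    .map((model) => `<option value="${model}">${model}</option>`)
    .join('');
  return `<!DOCTYPE html>
<html>
<body>
  <select id="model">${options}</select>
  <textarea id="prompt" placeholder="Ask a coding question"></textarea>
  <button id="send">Send</button>
  <pre id="response"><code></code></pre>
</body>
</html>`;
}
```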

8. Features:
------------

- Model Integration: Supports switching between Llama 3.1, GPT-3.5-turbo, and Gemini models.
- Code Generation: Generates code snippets, solves LeetCode & DSA problems, and supports autocompletion.
- User-Friendly Interface: Simplified design for ease of use, allowing users to interact with the AI seamlessly.

Sample chat and prompt response demonstration:
-----------------------------------------
![sample_chat_interface](https://github.com/user-attachments/assets/43cab310-e93e-4040-bc4c-4390c99684f6)

Personalized chat interface to match your needs:
-----------------------------------------
![Personalized_response](https://github.com/user-attachments/assets/30071652-86ed-4eae-9da3-a59df866635c)
Lines changed: 1 addition & 0 deletions
Lines changed: 75 additions & 0 deletions
const vscode = require('vscode');
const axios = require('axios');

// Placeholder keys: replace these with real credentials before use.
const TOGETHER_AI_API_KEY = 'your_actual_api_key_here';
const GROQ_AI_API_KEY = 'your_actual_api_key_here';
const LLAMA_API_KEY = 'your_actual_api_key_here';

// Send the prompt to the Together.ai completion endpoint.
async function getTogetherAIResponse(prompt) {
    const response = await axios.post('https://api.together.ai/v1/text/completion', {
        prompt: prompt
    }, {
        headers: {
            'Authorization': `Bearer ${TOGETHER_AI_API_KEY}`,
            'Content-Type': 'application/json'
        }
    });
    return response.data.text;
}

// Send the prompt to the Groq completion endpoint.
async function getGroqAIResponse(prompt) {
    const response = await axios.post('https://api.groq.com/v1/complete', {
        prompt: prompt
    }, {
        headers: {
            'Authorization': `Bearer ${GROQ_AI_API_KEY}`,
            'Content-Type': 'application/json'
        }
    });
    return response.data.text;
}

// Send the prompt to the Llama completion endpoint.
async function getLlamaResponse(prompt) {
    const response = await axios.post('https://api.llama.com/v1/complete', {
        prompt: prompt
    }, {
        headers: {
            'Authorization': `Bearer ${LLAMA_API_KEY}`,
            'Content-Type': 'application/json'
        }
    });
    return response.data.text;
}

function activate(context) {
    // Register the command: prompt for input, query all three providers,
    // and surface each response in an information message.
    let disposable = vscode.commands.registerCommand('my-ext.helloWorld', async () => {
        const prompt = await vscode.window.showInputBox({ prompt: 'Enter your prompt' });

        if (prompt) {
            try {
                const togetherAIResponse = await getTogetherAIResponse(prompt);
                const groqAIResponse = await getGroqAIResponse(prompt);
                const llamaResponse = await getLlamaResponse(prompt);

                vscode.window.showInformationMessage(`TogetherAI: ${togetherAIResponse}`);
                vscode.window.showInformationMessage(`GroqAI: ${groqAIResponse}`);
                vscode.window.showInformationMessage(`LLaMA: ${llamaResponse}`);
            } catch (error) {
                vscode.window.showErrorMessage('Error fetching response from LLMs');
            }
        }
    });

    context.subscriptions.push(disposable);
}

function deactivate() {}

module.exports = {
    activate,
    deactivate
};
Lines changed: 13 additions & 0 deletions
{
    "compilerOptions": {
        "module": "Node16",
        "target": "ES2022",
        "checkJs": true, /* Typecheck .js files. */
        "lib": [
            "ES2022"
        ]
    },
    "exclude": [
        "node_modules"
    ]
}
