"shortDescription": "This extension adds the functionality to your project to easily send requests to the \"Ollama\" AI, and get responses from it.",
11
+
"version": "1.0.0",
12
+
"description": [
13
+
"Create a simple action to send the following data to a Ollama AI server:",
14
+
"",
15
+
"- URL (The server's URL with port)",
16
+
"- Model (The model you want to generate the response)",
17
+
"- Prompt (The prompt you send to the server to reply to)"
18
+
],
19
+
"tags": [
20
+
"ollama",
21
+
"ai",
22
+
"llama",
23
+
"llama2",
24
+
"llama3",
25
+
"chat",
26
+
"bot"
27
+
],
28
+
"authorIds": [],
29
+
"dependencies": [],
30
+
"eventsFunctions": [
31
+
{
32
+
"description": "Sends the prompt string, the model string, and the stream boolean from the given structure.",
33
+
"fullName": "Send prompt to a model",
34
+
"functionType": "Action",
35
+
"name": "Request",
36
+
"sentence": "Send _PARAM2_ and _PARAM3_ to _PARAM1_, then store the response JSON in the variable \"Ollama_AI_JSON\"",
37
+
"events": [
38
+
{
39
+
"type": "BuiltinCommonInstructions::Standard",
40
+
"conditions": [],
41
+
"actions": [
42
+
{
43
+
"type": {
44
+
"value": "ModVarSceneTxt"
45
+
},
46
+
"parameters": [
47
+
"Ollama_AI.model",
48
+
"=",
49
+
"Model"
50
+
]
51
+
},
52
+
{
53
+
"type": {
54
+
"value": "ModVarSceneTxt"
55
+
},
56
+
"parameters": [
57
+
"Ollama_AI.prompt",
58
+
"=",
59
+
"Prompt"
60
+
]
61
+
},
62
+
{
63
+
"type": {
64
+
"value": "SetSceneVariableAsBoolean"
65
+
},
66
+
"parameters": [
67
+
"Ollama_AI.stream",
68
+
"False"
69
+
]
70
+
},
71
+
{
72
+
"type": {
73
+
"value": "SendAsyncRequest"
74
+
},
75
+
"parameters": [
76
+
"API_URL",
77
+
"ToJSON(Ollama_AI)",
78
+
"\"POST\"",
79
+
"",
80
+
"Ollama_AI_JSON",
81
+
"Ollama_AI_Error"
82
+
]
83
+
}
84
+
]
85
+
}
86
+
],
87
+
"parameters": [
88
+
{
89
+
"description": "The URL where the Ollama model is hosted (e.g. http://localhost:11434/api/generate)",
90
+
"longDescription": "The URL should be in this format: \"http://<ip address>:11434/api/generate\". If you are hosting and testing locally, use this URL: \"http://localhost:11434/api/generate\". Read the extension's GitHub issue on how to host your own server.",
"description": "The model to be used when generating a response",
97
+
"longDescription": "The recommended one is \"llama3\", an older version is \"llama2\", but you can also customize the models and use those. Read the extension's GitHub issue on how to do this.",
"description": "Your prompt to the AI, for example: \"Why is the sky blue?\"",
104
+
"longDescription": "The response will be stored in JSON in the variable \"Ollama_AI_JSON\". After that, you can convert the JSON to a structure. You can see how you can do it in the example on the extension's GitHub.",
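The value stored in "Ollama_AI_JSON" is Ollama's non-streaming /api/generate payload. Below is a minimal sketch of pulling the generated text out of it, assuming the usual field names; the OllamaGenerateResponse interface and extractAnswer helper are hypothetical, and only a subset of the reply's fields is modeled.

// The shape of Ollama's non-streaming /api/generate reply — only the
// fields most callers need; the real payload has more (timings, token
// counts, context).
interface OllamaGenerateResponse {
  model: string;       // model that produced the reply
  created_at: string;  // ISO-8601 timestamp
  response: string;    // the generated text
  done: boolean;       // true once generation has finished
}

// Hypothetical helper: parse the JSON string stored in "Ollama_AI_JSON"
// and pull out the answer text. In GDevelop, the equivalent step is the
// built-in action that converts a JSON string into a scene variable
// structure.
function extractAnswer(ollamaJson: string): string {
  const parsed = JSON.parse(ollamaJson) as OllamaGenerateResponse;
  return parsed.response;
}

// Example with a trimmed-down payload:
console.log(extractAnswer(
  '{"model":"llama3","created_at":"2024-01-01T00:00:00Z","response":"Because of Rayleigh scattering...","done":true}'
));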