Commit 9d512f9

NeuroTinkerLab committed: Initial commit (0 parents)

File tree: 8 files changed, +521 −0 lines

.gitignore

Lines changed: 8 additions & 0 deletions

```
# Virtual Environment
venv/
__pycache__/
*.pyc

# IDE / Editor specific
.idea/
.vscode/
```

README.md

Lines changed: 94 additions & 0 deletions
# Synt-E: The Protocol for Talking to AIs 🚀

Synt-E is a "language" designed to give instructions to Artificial Intelligences (LLMs) as efficiently as possible. Instead of writing long sentences, you use short, dense commands that the AI understands better, faster, and at a lower cost.

---

## 🤔 Why Does Synt-E Exist? The Problem

When we talk to an AI like ChatGPT, we use human language, which is full of words that are useless to a machine.

**BEFORE (Natural Language):**
> "Hi, could you please write me a Python script to analyze data from a CSV file?"

*(Too many words, too many "tokens", risk of ambiguity)*

**AFTER (Synt-E):**
> `task:code lang:python action:analyze_data format:csv`

*(Few words, zero ambiguity, maximum efficiency)*

---
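A Synt-E command like the one above is just space-separated `key:value` pairs, which also makes it easy to handle programmatically. As a minimal sketch (the protocol has no formal spec yet, so the parsing rules here are an assumption based on the examples in this README), a few lines of Python can turn a command back into structured fields:

```python
def parse_synt_e(command: str) -> dict:
    """Split a Synt-E command like 'task:code lang:python' into key/value pairs.

    Assumes the informal convention shown above: space-separated tokens,
    each of the form key:value; a repeated key overwrites the earlier value.
    """
    pairs = {}
    for token in command.split():
        key, _, value = token.partition(":")
        pairs[key] = value
    return pairs

print(parse_synt_e("task:code lang:python action:analyze_data format:csv"))
# {'task': 'code', 'lang': 'python', 'action': 'analyze_data', 'format': 'csv'}
```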
## ✨ How Does the Magic Work? The Logic Behind Synt-E

The secret is simple: **modern AIs have been trained on almost the entire Internet, and most of the Internet is in English.**

They have seen **billions of patterns** of code, terminal commands, configuration files, and technical texts in English. For them, technical English is not a foreign language; it is their **native language**.

- **Technical English is a highway:** giving a command in Synt-E is like taking the highway. The request reaches its destination quickly and smoothly.
- **Other languages are country roads:** the AI understands them, but it has to "translate" and "interpret" more, wasting time and resources.

### The Concrete Advantages

1. **💰 Token Savings (and Money):** fewer words mean fewer "tokens" to pay for if you use a paid service. Locally, it means less load on your CPU/GPU.
2. **⚡ Superior Speed:** the AI doesn't have to work out how to interpret your pleasantries. It gets straight to the point and answers faster.
3. **✅ Better Answers:** by eliminating ambiguity, you reduce the risk of the AI misunderstanding and giving you a wrong or incomplete answer.

---

## 💻 Try It Now on Your PC! (with Ollama)

This project includes a simple Python program that turns your sentences, in Italian or any other language, into the Synt-E protocol, using an AI that runs **free and offline** on your computer.

### Step 1: Prerequisites

1. **Python:** make sure you have it installed. If you don't, download it from [python.org](https://python.org).
2. **Ollama:** install Ollama to run AIs locally. Download it from [ollama.com](https://ollama.com).

### Step 2: Choose the Right Model (IMPORTANT)

Not all AI models are suited to this task.

- **"Assistant" models (like Llama 3.1 Instruct):** they are too "helpful." If you ask them to translate a request to write code, they will write the code instead of translating the request. **They are the least suitable.**
- **"Raw" or "unfiltered" models (like GPT-OSS or Dolphin):** they are more flexible and obedient. They understand their role as a "compiler" and do not try to perform the task for you. **They are the best fit for this script.**

In our tests, the model that worked best was **`gpt-oss:20b`**.
### Step 3: Install and Run

1. **Download the model:** open a terminal and run:
   ```bash
   ollama pull gpt-oss:20b
   ```
2. **Install the library:** in the project folder, run:
   ```bash
   pip install ollama
   ```
3. **Run the script:** make sure Ollama is running, then start the program:
   ```bash
   python synt_e.py
   ```
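Under the hood, `synt_e.py` simply wraps your text in a chat payload with a fixed system prompt and sends it to the local model. A minimal sketch of that call (the prompt below is abbreviated; the full one lives in the script):

```python
# Abbreviated stand-in for the system prompt used by synt_e.py.
SYSTEM_PROMPT = "You are a Synt-E compiler. Output ONLY the single-line command."

def build_messages(user_text: str) -> list:
    """Assemble the chat payload that gets sent to the local Ollama model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With Ollama running and `pip install ollama` done, the actual call would be:
# import ollama
# reply = ollama.chat(model="gpt-oss:20b",
#                     messages=build_messages("Analyze data from a CSV file"))
# print(reply["message"]["content"])
```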
### Usage Examples

Now you can write your requests. The program sends them to your local model and returns the Synt-E translation.

**Example 1: Technical Request**
> **YOU >** Write a Python script that uses Keras for sentiment analysis.
>
> **AI >** `task:write_script language:python libraries:keras model:RNN dataset:movie_reviews task:sentiment_analysis`

**Example 2: Creative Request**
> **YOU >** Generate an image of a red dragon, in watercolor style.
>
> **AI >** `task:generate_image subject:red_dragon style:watercolor`

**Example 3: Complex Request**
> **YOU >** Prepare a PowerPoint presentation for the quarterly meeting with the CEO on the topic of sales.
>
> **AI >** `task:create_presentation format:powerpoint event:quarterly_meeting audience:ceo topic:sales`
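Going the other way, the commands in these examples all share one shape, so generating them from structured fields is equally simple. A hypothetical helper (not part of this repo; the snake_case normalization mirrors the convention used in the examples above):

```python
def to_synt_e(**fields: str) -> str:
    """Join keyword fields into one Synt-E command, snake_casing each value."""
    return " ".join(
        f"{key}:{value.strip().lower().replace(' ', '_')}"
        for key, value in fields.items()
    )

print(to_synt_e(task="create presentation", format="powerpoint", audience="CEO"))
# task:create_presentation format:powerpoint audience:ceo
```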
---

## 🏗️ The Future of the Project

This script is just a prototype. The complete Synt-E architecture we have in mind includes:

- A **hybrid engine** that uses fast rules for simple commands.
- A **security** system to block sensitive data.
- An **ecosystem** with extensions for editors like VS Code.

Have fun compiling your thoughts!

backup/README.md

Lines changed: 94 additions & 0 deletions
# Synt-E: The Protocol for Talking to AIs 🚀

Synt-E is a "language" designed to give instructions to Artificial Intelligences (LLMs) as efficiently as possible. Instead of writing long sentences, you use short, dense commands that the AI understands better, faster, and at a lower cost.

---

## 🤔 Why Does Synt-E Exist? The Problem

When we talk to an AI like ChatGPT, we use human language, which is full of words that are useless to a machine.

**BEFORE (Natural Language):**
> "Hi, could you please write me a Python script to analyze data from a CSV file?"

*(Too many words, too many "tokens", risk of ambiguity)*

**AFTER (Synt-E):**
> `task:code lang:python action:analyze_data format:csv`

*(Few words, zero ambiguity, maximum efficiency)*

---

## ✨ How Does the Magic Work? The Logic Behind Synt-E

The secret is simple: **modern AIs have been trained on almost the entire Internet, and most of the Internet is in English.**

They have seen **billions of patterns** of code, terminal commands, configuration files, and technical texts in English. For them, technical English is not a foreign language; it is their **native language**.

- **Technical English is a highway:** giving a command in Synt-E is like taking the highway. The request reaches its destination quickly and smoothly.
- **Other languages are country roads:** the AI understands them, but it has to "translate" and "interpret" more, wasting time and resources.

### The Concrete Advantages

1. **💰 Token Savings (and Money):** fewer words mean fewer "tokens" to pay for if you use a paid service. Locally, it means less load on your CPU/GPU.
2. **⚡ Superior Speed:** the AI doesn't have to work out how to interpret your pleasantries. It gets straight to the point and answers faster.
3. **✅ Better Answers:** by eliminating ambiguity, you reduce the risk of the AI misunderstanding and giving you a wrong or incomplete answer.

---

## 💻 Try It Now on Your PC! (with Ollama)

This project includes a simple Python program that turns your sentences, in Italian or any other language, into the Synt-E protocol, using an AI that runs **free and offline** on your computer.

### Step 1: Prerequisites

1. **Python:** make sure you have it installed. If you don't, download it from [python.org](https://python.org).
2. **Ollama:** install Ollama to run AIs locally. Download it from [ollama.com](https://ollama.com).

### Step 2: Choose the Right Model (IMPORTANT)

Not all AI models are suited to this task.

- **"Assistant" models (like Llama 3.1 Instruct):** they are too "helpful." If you ask them to translate a request to write code, they will write the code instead of translating the request. **They are the least suitable.**
- **"Raw" or "unfiltered" models (like GPT-OSS or Dolphin):** they are more flexible and obedient. They understand their role as a "compiler" and do not try to perform the task for you. **They are the best fit for this script.**

In our tests, the model that worked best was **`gpt-oss:20b-unlocked`**.

### Step 3: Install and Run

1. **Download the model:** open a terminal and run:
   ```bash
   ollama pull gpt-oss:20b-unlocked
   ```
2. **Install the library:** in the project folder, run:
   ```bash
   pip install ollama
   ```
3. **Run the script:** make sure Ollama is running, then start the program:
   ```bash
   python synt_e.py
   ```

### Usage Examples

Now you can write your requests. The program sends them to your local model and returns the Synt-E translation.

**Example 1: Technical Request**
> **YOU >** Write a Python script that uses Keras for sentiment analysis.
>
> **AI >** `task:write_script language:python libraries:keras model:RNN dataset:movie_reviews task:sentiment_analysis`

**Example 2: Creative Request**
> **YOU >** Generate an image of a red dragon, in watercolor style.
>
> **AI >** `task:generate_image subject:red_dragon style:watercolor`

**Example 3: Complex Request**
> **YOU >** Prepare a PowerPoint presentation for the quarterly meeting with the CEO on the topic of sales.
>
> **AI >** `task:create_presentation format:powerpoint event:quarterly_meeting audience:ceo topic:sales`

---

## 🏗️ The Future of the Project

This script is just a prototype. The complete Synt-E architecture we have in mind includes:

- A **hybrid engine** that uses fast rules for simple commands.
- A **security** system to block sensitive data.
- An **ecosystem** with extensions for editors like VS Code.

Have fun compiling your thoughts!

backup/synt_e.py

Lines changed: 77 additions & 0 deletions
```python
import time
import logging

import ollama

# --- CONFIGURATION ---
# Your local model (make sure it's downloaded)
MODEL_NAME = "gpt-oss:20b-unlocked"

# --- LOGGING ---
logging.basicConfig(level=logging.INFO, format='%(asctime)s | %(message)s', datefmt='%H:%M:%S')
logger = logging.getLogger("Synt-E")

# --- THE BRAIN (System Prompt) ---
# Strict instructions that make the model act like a compiler
SYSTEM_PROMPT = """
CRITICAL ROLE: You are a Synt-E compiler. Translate user requests into a token-efficient, single-line command.

OUTPUT RULES (MANDATORY):
1. **NO QUOTES:** Use snake_case for multi-word values (e.g., "quarterly meeting" becomes quarterly_meeting).
2. **NO ARROWS (->):** Chain actions by simple succession if necessary.
3. **NO EXPLANATIONS:** Output ONLY the final command string.
4. **BE COMPLETE:** Capture all critical details like format, quantity, or specific names.

--- EXAMPLES ---
User: "Prepare a PowerPoint presentation for the Q3 meeting"
AI: task:create_presentation format:powerpoint event:quarterly_meeting topic:Q3_sales

User: "Search for X and then filter by Y"
AI: task:search topic:X filter:Y

User: "Write a python script"
AI: task:code lang:python
--- END EXAMPLES ---
"""


def process_with_ai(text):
    """Send one request to the local model and return (command, seconds taken)."""
    start = time.time()
    logger.info(f"🧠 Sending to {MODEL_NAME}...")

    try:
        response = ollama.chat(model=MODEL_NAME, messages=[
            {'role': 'system', 'content': SYSTEM_PROMPT},
            {'role': 'user', 'content': text},
        ])

        result = response['message']['content'].strip()
        duration = time.time() - start

        # Models sometimes get chatty: strip backticks and boilerplate prefixes
        result = result.replace("`", "").replace("Here is the Synt-E:", "").strip()

        return result, duration

    except Exception as e:
        return f"ERROR: {e}", 0


# --- MAIN LOOP ---
def main():
    print("\n==================================================")
    print(f"  SYNT-E PURE AI (Powered by {MODEL_NAME})")
    print("  Mode: PURE AI (No Regex)")
    print("==================================================\n")

    while True:
        user_input = input("YOU > ").strip()
        if not user_input:
            continue
        if user_input.lower() == "exit":
            break

        # Direct call to the AI
        synt_e_code, time_taken = process_with_ai(user_input)

        print(f"AI > {synt_e_code}")
        print(f"     (Time: {time_taken:.2f}s)\n")


if __name__ == "__main__":
    main()
```

italian_version/README.md

Lines changed: 94 additions & 0 deletions
# Synt-E: The Protocol for Talking to AIs 🚀

Synt-E is a "language" designed to give instructions to Artificial Intelligences (LLMs) as efficiently as possible. Instead of writing long sentences, you use short, dense commands that the AI understands better, faster, and at a lower cost.

---

## 🤔 Why Does Synt-E Exist? The Problem

When we talk to an AI like ChatGPT, we use human language, which is full of words that are useless to a machine.

**BEFORE (Natural Language):**
> "Hi, could you please write me a Python script to analyze data from a CSV file?"

*(Too many words, too many "tokens", risk of ambiguity)*

**AFTER (Synt-E):**
> `task:code lang:python action:analyze_data format:csv`

*(Few words, zero ambiguity, maximum efficiency)*

---

## ✨ How Does the Magic Work? The Logic Behind Synt-E

The secret is simple: **modern AIs have been trained on almost the entire Internet, and most of the Internet is in English.**

They have seen **billions of patterns** of code, terminal commands, configuration files, and technical texts in English. For them, technical English is not a foreign language; it is their **native language**.

- **Technical English is a highway:** giving a command in Synt-E is like taking the highway. The request reaches its destination quickly and smoothly.
- **Other languages are country roads:** the AI understands them, but it has to "translate" and "interpret" more, wasting time and resources.

### The Concrete Advantages

1. **💰 Token Savings (and Money):** fewer words mean fewer "tokens" to pay for if you use a paid service. Locally, it means less load on your CPU/GPU.
2. **⚡ Superior Speed:** the AI doesn't have to work out how to interpret your pleasantries. It gets straight to the point and answers faster.
3. **✅ Better Answers:** by eliminating ambiguity, you reduce the risk of the AI misunderstanding and giving you a wrong or incomplete answer.

---

## 💻 Try It Now on Your PC! (with Ollama)

This project includes a simple Python program that turns your sentences, in Italian or any other language, into the Synt-E protocol, using an AI that runs **free and offline** on your computer.

### Step 1: Prerequisites

1. **Python:** make sure you have it installed. If you don't, download it from [python.org](https://python.org).
2. **Ollama:** install Ollama to run AIs locally. Download it from [ollama.com](https://ollama.com).

### Step 2: Choose the Right Model (IMPORTANT)

Not all AI models are suited to this task.

- **"Assistant" models (like Llama 3.1 Instruct):** they are too "helpful." If you ask them to translate a request to write code, they will write the code instead of translating the request. **They are the least suitable.**
- **"Raw" or "unfiltered" models (like GPT-OSS or Dolphin):** they are more flexible and obedient. They understand their role as a "compiler" and do not try to perform the task for you. **They are the best fit for this script.**

In our tests, the model that worked best was **`gpt-oss:20b-unlocked`**.

### Step 3: Install and Run

1. **Download the model:** open a terminal and run:
   ```bash
   ollama pull gpt-oss:20b-unlocked
   ```
2. **Install the library:** in the project folder, run:
   ```bash
   pip install ollama
   ```
3. **Run the script:** make sure Ollama is running, then start the program:
   ```bash
   python synt_e.py
   ```

### Usage Examples

Now you can write your requests. The program sends them to your local model and returns the Synt-E translation.

**Example 1: Technical Request**
> **YOU >** Write a Python script that uses Keras for sentiment analysis.
>
> **AI >** `task:write_script language:python libraries:keras model:RNN dataset:movie_reviews task:sentiment_analysis`

**Example 2: Creative Request**
> **YOU >** Generate an image of a red dragon, in watercolor style.
>
> **AI >** `task:generate_image subject:red_dragon style:watercolor`

**Example 3: Complex Request**
> **YOU >** Prepare a PowerPoint presentation for the quarterly meeting with the CEO on the topic of sales.
>
> **AI >** `task:create_presentation format:powerpoint event:quarterly_meeting audience:ceo topic:sales`

---

## 🏗️ The Future of the Project

This script is just a prototype. The complete Synt-E architecture we have in mind includes:

- A **hybrid engine** that uses fast rules for simple commands.
- A **security** system to block sensitive data.
- An **ecosystem** with extensions for editors like VS Code.

Have fun compiling your thoughts!
