# Synt-E: The Protocol for Talking to AIs 🚀

Synt-E is a "language" designed to give instructions to Artificial Intelligences (LLMs) as efficiently as possible. Instead of writing long sentences, you use short, dense commands that the AI understands more reliably, answers faster, and processes at a lower cost.

---

## 🤔 Why Does Synt-E Exist? The Problem

When we talk to an AI like ChatGPT, we use human language, which is full of words that are useless to a machine.

**BEFORE (Natural Language):**
> "Hi, could you please write me a Python script to analyze data from a CSV file?"

*(Too many words, too many "tokens", risk of ambiguity)*

**AFTER (Synt-E):**
> `task:code lang:python action:analyze_data format:csv`

*(Few words, zero ambiguity, maximum efficiency)*

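Because every Synt-E command is just a space-separated list of `key:value` pairs, it is trivial to turn one into a structured object that other tools can consume. The sketch below is purely illustrative: there is no official parser yet, and the helper name `parse_synt_e` is invented for this example.

```python
# Minimal sketch (not part of any official spec) of how a Synt-E command
# maps onto a plain key/value structure that downstream tools could consume.
def parse_synt_e(command: str) -> dict[str, str]:
    """Split a Synt-E command like 'task:code lang:python' into a dict."""
    pairs = {}
    for token in command.split():
        key, _, value = token.partition(":")  # everything after the first ':' is the value
        pairs[key] = value
    return pairs

print(parse_synt_e("task:code lang:python action:analyze_data format:csv"))
# {'task': 'code', 'lang': 'python', 'action': 'analyze_data', 'format': 'csv'}
```
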
---

## ✨ How Does the Magic Work? The Logic Behind Synt-E

The secret is simple: **modern LLMs are trained on a huge slice of the Internet, and a large share of that text is in English.**

They have seen **billions of patterns** of code, terminal commands, configuration files, and technical documents in English. For them, technical English is not a foreign language; it is their **native language**.

- **Technical English is a highway:** Giving a command in Synt-E is like getting on the highway. The request reaches its destination quickly and smoothly.
- **Other languages are country roads:** The AI understands them, but it has to "translate" and "interpret" more, wasting time and resources.

### The Concrete Advantages
1. **💰 Token Savings (and Money):** Fewer words mean fewer "tokens" to pay for if you use a paid service. Locally, it means less load on your CPU/GPU (see the token-count sketch after this list).
2. **⚡ Superior Speed:** The AI doesn't have to work out how to interpret your pleasantries. It gets straight to the point and answers sooner.
3. **✅ Better Answers:** By eliminating ambiguity, you reduce the risk of the AI misunderstanding you and giving a wrong or incomplete answer.

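If you want to measure the first advantage yourself, you can count tokens directly. A minimal sketch, assuming OpenAI's `tiktoken` package (`pip install tiktoken`) as a rough proxy; local models served by Ollama use their own tokenizers, so treat the numbers as indicative rather than exact.

```python
# Rough token-count comparison using OpenAI's tiktoken tokenizer as a proxy.
# Local models use their own tokenizers, so exact counts will differ,
# but the ratio gives a feel for the savings.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

natural = "Hi, could you please write me a Python script to analyze data from a CSV file?"
synt_e = "task:code lang:python action:analyze_data format:csv"

for label, text in [("Natural language", natural), ("Synt-E", synt_e)]:
    print(f"{label}: {len(enc.encode(text))} tokens")
```
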
---

## 💻 Try It Now on Your PC! (with Ollama)

This project includes a simple Python program that transforms your sentences in Italian (or any other language) into the Synt-E protocol, using an AI that runs **free and offline** on your computer.

### Step 1: Prerequisites
1. **Python:** Make sure you have it installed. If you don't, download it from [python.org](https://python.org).
2. **Ollama:** Install Ollama to run AIs locally. Download it from [ollama.com](https://ollama.com).

### Step 2: Choose the Right Model (IMPORTANT)
Not all AI models are suitable for this task.
- **"Assistant" Models (like Llama 3.1 Instruct):** They are too "helpful." If you ask them to translate a request to write code, they will write the code instead of translating it. **They are the least suitable.**
- **"Raw" or "Unfiltered" Models (like GPT-OSS or Dolphin):** They are more flexible and obedient. They understand their role as a "compiler" and do not try to perform the task for you. **They are the best for this script.**

Among the models we tested, the clear winner was **`gpt-oss:20b-unlocked`**.

### Step 3: Install and Run
1. **Download the model:** Open the terminal and run this command.
   ```bash
   ollama pull gpt-oss:20b-unlocked
   ```

2. **Install the library:** In the project folder, run this command.
   ```bash
   pip install ollama
   ```

3. **Run the script:** Make sure Ollama is running, then run the program (a sketch of what `synt_e.py` might look like is shown after this list).
   ```bash
   python synt_e.py
   ```

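Curious what is inside? Here is a minimal sketch of what a script like `synt_e.py` could look like, built on the `ollama` Python library's `chat` API. The system prompt wording, the model name, and the function name `to_synt_e` are assumptions for illustration; the real script may differ.

```python
# synt_e.py -- minimal sketch of a "Synt-E compiler" on top of a local Ollama model.
# The system prompt wording and the model name are illustrative assumptions.
import ollama

MODEL = "gpt-oss:20b-unlocked"

SYSTEM_PROMPT = (
    "You are a compiler, not an assistant. Translate the user's request into a "
    "single line of space-separated key:value pairs in technical English "
    "(the Synt-E protocol). Never perform the task itself, never add explanations."
)

def to_synt_e(request: str) -> str:
    """Send one natural-language request to the local model and return its Synt-E translation."""
    response = ollama.chat(
        model=MODEL,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": request},
        ],
    )
    return response["message"]["content"].strip()

if __name__ == "__main__":
    while True:
        request = input("YOU > ")
        if not request:
            break
        print("AI  >", to_synt_e(request))
```

The system prompt does the heavy lifting here: it is what tells an obedient model to behave like a compiler instead of rushing off to perform the task itself, which is exactly the failure mode described in Step 2.
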
### Usage Examples
Now you can write your requests. The program will send them to your local model and return the translation in Synt-E.

**Example 1: Technical Request**
> **YOU >** Write a Python script that uses Keras for sentiment analysis.
>
> **AI >** `task:write_script language:python libraries:keras model:RNN dataset:movie_reviews goal:sentiment_analysis`

**Example 2: Creative Request**
> **YOU >** Generate an image of a red dragon, in watercolor style.
>
> **AI >** `task:generate_image subject:red_dragon style:watercolor`

**Example 3: Complex Request**
> **YOU >** Prepare a PowerPoint presentation for the quarterly meeting with the CEO on the topic of sales.
>
> **AI >** `task:create_presentation format:powerpoint event:quarterly_meeting audience:ceo topic:sales`

---

## 🏗️ The Future of the Project
This script is just a prototype. The full Synt-E architecture we have in mind includes:
- A **hybrid engine** that uses fast rules for simple commands (see the sketch after this list).
- A **security** system to block sensitive data.
- An **ecosystem** with extensions for editors like VS Code.

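To make the hybrid-engine idea concrete, here is a purely hypothetical sketch: cheap regex rules handle the requests they recognize, and anything they miss would be passed to the local model as before. The rules, patterns, and the function name `fast_path` are all invented for this illustration.

```python
# Hypothetical sketch of the planned "hybrid engine": try cheap pattern rules
# first, and only fall back to the local LLM when no rule matches.
import re

# Each rule pairs a regex with a template for the Synt-E output (names are illustrative).
RULES = [
    (re.compile(r"\bimage of (?P<subject>.+?) in (?P<style>\w+) style", re.I),
     "task:generate_image subject:{subject} style:{style}"),
    (re.compile(r"\bpython script\b.*\bcsv\b", re.I),
     "task:code lang:python format:csv"),
]

def fast_path(request: str) -> str | None:
    """Return a Synt-E command if a rule matches, else None (caller falls back to the LLM)."""
    for pattern, template in RULES:
        match = pattern.search(request)
        if match:
            fields = {k: v.strip().replace(" ", "_") for k, v in match.groupdict().items()}
            return template.format(**fields)
    return None

print(fast_path("Generate an image of a red dragon in watercolor style"))
# task:generate_image subject:a_red_dragon style:watercolor
```
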
Have fun compiling your thoughts!