11. Profiles
A profile is a settings file that defines the type of translation service and the settings needed to use that service. You can create your own profile or modify an existing one. You can also code your own translation service type.
For AI services such as OpenAI, or LM Studio running locally, the host APIs are very similar, but there are differences between services, and further differences between the models themselves. Some models can return JSON responses, some support structured output, and some provide reasoning.
A lot of prompt testing will be needed, and you can use the Profile Editor for this.
This shows the Profile Editor for the OpenAI_gpt4_v1 profile. A translation test was attempted and you can see the results. The profile's .prf file is also shown.
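To make that similarity concrete, here is a minimal sketch (not the add-in's own code) of a chat-completions call that works against both OpenAI and a local LM Studio server. The base URL, API key, model name, and prompt text are assumptions you would swap for your own:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class ChatProbe
{
    static async Task Main()
    {
        // LM Studio's local server defaults to http://localhost:1234/v1 and
        // ignores the API key; for OpenAI, use https://api.openai.com/v1
        // and a real key instead.
        var baseUrl = "http://localhost:1234/v1";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Authorization", "Bearer lm-studio");

        // The request shape is identical for both services; what differs is
        // which models honor options like JSON-only or structured output.
        var request = new
        {
            model = "llama-3.2-1b-instruct",
            messages = new object[]
            {
                new { role = "system", content = "Translate to French. Return only the translation." },
                new { role = "user",   content = "Open file" }
            },
            temperature = 0
        };

        var resp = await http.PostAsJsonAsync(baseUrl + "/chat/completions", request);
        Console.WriteLine(await resp.Content.ReadAsStringAsync());
    }
}
```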


Important
You don't need to use a paid AI API. You may be able to run a model locally, or use an existing, non-AI web service of some kind.
Preliminary testing on a local LM Studio with llama-3.2-1b-instruct
has yielded impressive free, low-latency results on an 8GB GTX 1080. Judging by how fast the AI field is innovating, free, near-perfect local translation on typical PC hardware should be here very, very soon.
You only need to touch:
- Edit \Common\TransFuncs\TTransFuns.cs and add the name of your Translation Function there.
- Add \Common\TransFuncs\YourFunction\TTF_YourFunction.cs
Your Translation Function is a static class. It must contain the following functions:
public static string GetSettingsPath(string funcType)
public static bool InitGlobal(TLog.eMode mode, string funcType, string fromCulture)
public static bool InitPerCulture(TLog.eMode mode, string funcType, string fromCulture, string toCulture)
public static string Translate(TLog.eMode mode, string funcType, string fromCulture, string toCulture, string textToTranslate, string hintToken)
public static bool DeInitGlobal(TLog.eMode mode, string funcType)
See TTF_OpenAI_1 for an example of how to implement this.
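Below is a bare-bones sketch of such a class. It assumes the TLog type from the host project, stubs every body, and uses a hypothetical settings path; TTF_OpenAI_1 remains the real reference implementation:

```csharp
// Sketch only: TLog.eMode is defined by the host project, so this file
// compiles inside the solution, not standalone.
public static class TTF_YourFunction
{
    // Return the path of the settings file for this function type.
    public static string GetSettingsPath(string funcType)
    {
        return @"\Common\TransFuncs\YourFunction\YourFunction.settings"; // hypothetical path
    }

    // One-time setup: load settings, open connections. Return false to abort.
    public static bool InitGlobal(TLog.eMode mode, string funcType, string fromCulture)
    {
        return true;
    }

    // Per-culture setup, called once for each target culture being translated.
    public static bool InitPerCulture(TLog.eMode mode, string funcType, string fromCulture, string toCulture)
    {
        return true;
    }

    // Translate a single item; hintToken can carry per-item context for the prompt.
    public static string Translate(TLog.eMode mode, string funcType, string fromCulture,
        string toCulture, string textToTranslate, string hintToken)
    {
        // Call your service here and return only the translated text.
        return textToTranslate;
    }

    // One-time teardown.
    public static bool DeInitGlobal(TLog.eMode mode, string funcType)
    {
        return true;
    }
}
```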
Tip
If you are using AI, most of your time will be spent tweaking the prompts. Getting the model to follow your rules and return only the translated text can take a lot of experimentation. Keep the concept of context (the model's short-term memory) in mind: it is why the OpenAI_1 function sends the full prompt with every translated item. If it didn't, the context length could be exceeded and the model would forget what it was being asked to do.
We tried a batching approach, but you would have to know the context length, calculate token consumption before translating, and handle errors and batch resumption. It was super-complex and not worth it. Translating one item at a time is inexpensive and fast enough.
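As an illustration of the one-at-a-time pattern, this sketch rebuilds the complete message list (rules plus a single item) on every call, so the context window never grows; the helper and parameter names are illustrative, not the project's API:

```csharp
// Illustrative only: names are not the project's API.
static object BuildRequest(string model, string rulesPrompt, string textToTranslate)
{
    return new
    {
        model,
        // The full rules are resent on every call and no earlier items are
        // included, so token usage per request stays flat and the model
        // never "forgets" the instructions mid-run.
        messages = new object[]
        {
            new { role = "system", content = rulesPrompt },
            new { role = "user",   content = textToTranslate }
        },
        temperature = 0
    };
}
```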
To test your Translation Function, use the Translator's Translation Functions screen. This allows you to send one translation request to your Translation Function and view the results. Enable verbose logging to view TLog.eLogItemType.dbg messages.