Using OpenAI API to create an AI assistant for our IoT projects #9688
jgpeiro
started this conversation in
Show and tell
Replies: 1 comment
-
I forgot to comment here, but a few weeks ago I uploaded the finished toy project to hackaday.io. As mentioned, it can record voice via an I2S microphone, transcribe and translate it via the Whisper model (or the Google API), and then convert your requests to MicroPython code via the Codex API.
The project documentation
The project source code
-
Hello All,
I have found a method to use OpenAI Codex as an AI assistant for our IoT projects.
We can access the OpenAI API via urequests with a few lines of code and turn any IoT device into a cool AI assistant.
The code to make a request looks like this:
You need your own API key, but the price is low enough for any personal use ($0.02/1K tokens).
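A minimal sketch of such a request, split so the payload can be built anywhere and only the `urequests.post` call happens on the device. The endpoint and the `code-davinci-002` model name are assumptions from the Codex era (those models have since been deprecated); check the current OpenAI API reference before using them.

```python
import json

# Assumed endpoint from the Codex era; verify against the current API docs.
API_URL = "https://api.openai.com/v1/completions"

def build_request(api_key, prompt, model="code-davinci-002", max_tokens=256):
    # Returns (headers, body) ready to pass to urequests.post on MicroPython.
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0,
    })
    return headers, body

# On the device (MicroPython):
#   import urequests
#   headers, body = build_request(MY_KEY, "# blink the on-board LED\n")
#   resp = urequests.post(API_URL, headers=headers, data=body)
#   code = resp.json()["choices"][0]["text"]
```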
A typical example looks like this one: you provide a prompt with a "casual" HAL definition, then ask the assistant for things and execute its response as MicroPython code via eval/exec/import.
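The execute-the-response step can be sketched like this: the model's text is run with `exec` in its own namespace, so the generated code cannot accidentally clobber our globals. The canned string standing in for a model reply is purely illustrative.

```python
def run_generated(code_text):
    # Execute model-generated code in a dedicated namespace and return it,
    # so anything the generated code defined can be inspected afterwards.
    ns = {}
    try:
        exec(code_text, ns)
    except Exception as e:
        # Generated code is untrusted text; never let it crash the device.
        print("generated code failed:", e)
    return ns

# Canned "response" standing in for real model output:
generated = "def double(n):\n    return 2 * n\nresult = double(5)\n"
ns = run_generated(generated)
```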

It can handle the common MicroPython machine classes, timers, network, sockets, asyncio, and framebuf. And if you have a custom API, you can simply add it to the conversation as an example.
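Adding a custom API to the conversation amounts to few-shot prompting: prepend a worked example of your API so the model imitates it. The `myhal`/`set_rgb` names below are hypothetical placeholders for whatever your own HAL exposes.

```python
# Hypothetical custom-API example shown to the model (few-shot style).
HAL_EXAMPLES = """\
# Custom API example:
# turn the status LED red
from myhal import set_rgb
set_rgb(255, 0, 0)
"""

def make_prompt(user_request):
    # Prepend the custom-API examples, then append the new request as a
    # comment for the model to complete with code.
    return HAL_EXAMPLES + "\n# " + user_request + "\n"
```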
For ESP32 devices with enough RAM, Google voice-to-text can be used:
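A sketch of the request body for Google Cloud Speech-to-Text's v1 `speech:recognize` endpoint, assuming the raw I2S capture is 16-bit linear PCM; on MicroPython, `binascii` is `ubinascii`. Field names follow the public v1 API, but verify them against Google's documentation before relying on this.

```python
import json, binascii  # on MicroPython: ubinascii

def build_stt_request(pcm_bytes, sample_rate=16000, lang="en-US"):
    # Google Cloud Speech-to-Text v1 "recognize" body: recognition config
    # plus the audio payload as base64 text.
    audio_b64 = binascii.b2a_base64(pcm_bytes).decode().strip()
    return json.dumps({
        "config": {
            "encoding": "LINEAR16",        # raw 16-bit PCM from the i2s mic
            "sampleRateHertz": sample_rate,
            "languageCode": lang,
        },
        "audio": {"content": audio_b64},
    })

# On the device:
#   body = build_stt_request(captured_pcm)
#   resp = urequests.post(
#       "https://speech.googleapis.com/v1/speech:recognize?key=" + MY_KEY,
#       headers={"Content-Type": "application/json"}, data=body)
```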
Another usage example is this one, where a completely new Pong game is created every minute (about 200 lines of Python code) and then executed on the second core of the Pico W. The video and the prompt used are shown here:
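Running the generated game on the second core can be sketched with `_thread`, which on the RP2040 port of MicroPython schedules the function on core 1 (on CPython it is an ordinary thread, so this sketch runs anywhere). The toy loop standing in for a generated game is illustrative only.

```python
import _thread
import time

def run_on_second_core(code_text, ns):
    # On the Pico W, _thread.start_new_thread runs the worker on the
    # RP2040's second core; here it simply execs the generated code.
    def worker():
        exec(code_text, ns)
    _thread.start_new_thread(worker, ())

# Toy stand-in for a generated ~200-line game loop:
shared = {"frames": 0}
run_on_second_core(
    "for _ in range(3):\n    shared['frames'] += 1\n",
    {"shared": shared},
)
time.sleep(0.5)  # give the other core time to finish
```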
https://www.youtube.com/watch?v=TgRffMAOubQ