This project is called Gemination.
It is an agriculture helper app that uses Google Gemini 3.
With this app, a farmer can:
- Take a photo or video of a crop
- (Optionally) record audio to describe the problem
- Send it to the app
- Get a diagnosis: crop, possible disease, treatment, and a confidence score
- Ask follow‑up questions in a chat
If the AI is not very confident, it can search the web to double‑check the answer.
If you send your location (lat/lng), the app also uses where you are to adjust the diagnosis.
There are two parts:
- a backend API in the `api` folder (FastAPI + Gemini 3)
- a frontend web app in the `web` folder (Next.js/React)
- Open a terminal in the `api` folder. From the project folder, run:

  ```
  cd api
  ```
- Create and activate a Python virtual environment (only needed once)
  - On Windows (PowerShell):

    ```
    python -m venv venv
    .\venv\Scripts\activate
    ```
- Install Python packages:

  ```
  pip install -r requirements.txt
  ```
- Set your Gemini API key
  - Create a file called `.env` inside the `api` folder.
  - Put your keys inside like this:

    ```
    GEMINI_API_KEY=your_key_here
    GOOGLE_CLOUD_CREDENTIALS_JSON=your_json_content_here
    ```
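As a sketch of how the backend might read these variables once the `.env` file exists: the helper below fails fast with a clear message when a key is missing. `get_required_env` is a hypothetical helper for illustration, not the project's actual code; if `python-dotenv` is installed, `load_dotenv()` would populate `os.environ` from the `.env` file first.

```python
import os

# Hypothetical helper -- illustrates failing fast when a required key is unset.
# With python-dotenv: from dotenv import load_dotenv; load_dotenv()

def get_required_env(name: str) -> str:
    """Return the environment variable's value, or raise a clear error if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Usage:
# api_key = get_required_env("GEMINI_API_KEY")
```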
- Start the backend server:

  ```
  uvicorn main:app --reload
  ```
- The API will usually run at `http://localhost:8000`.
- You can see the docs at `http://localhost:8000/docs`.
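To confirm the server is actually up, here is a small stdlib-only check that fetches the docs page (the default port is an assumption from the step above):

```python
from urllib.request import urlopen
from urllib.error import URLError

def backend_is_up(base_url: str = "http://localhost:8000",
                  timeout: float = 2.0) -> bool:
    """Return True if the FastAPI docs page responds with HTTP 200."""
    try:
        with urlopen(f"{base_url}/docs", timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False
```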
- User accounts: create user, login, get current user (authentication).
- Diagnosis:
  - Receive image, audio, and video.
  - Save files in `static_image`, `static_audio`, and `static_video`.
  - Call Gemini 3 with the media and optional location.
  - Get back `crop`, `disease`, `treatment`, and `confidence`.
  - If confidence is low, call Gemini again with web search to verify.
  - Store the diagnosis in the database.
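The confidence-gated retry described above can be sketched as follows. The `0.7` threshold and the two callables are illustrative assumptions, not the project's actual values:

```python
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff, not the project's actual value

def run_diagnosis(media, location, diagnose, diagnose_with_web_search):
    """Ask Gemini directly; if confidence is low, retry with web search."""
    result = diagnose(media, location)
    if result["confidence"] < CONFIDENCE_THRESHOLD:
        # Low confidence: ask again with web-search grounding to verify.
        result = diagnose_with_web_search(media, location)
    return result
```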
- Chat:
  - Let users ask questions about a specific diagnosis.
  - Use Gemini 3 to answer.
  - Use the diagnosis info, chat history, and location context.
  - Follow confidence rules (say “unsure” when confidence is low).
- Translation:
  - Translate between the user’s language and English.
  - This is used both for the diagnosis and for chat replies.
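A translation step like the one above can be wrapped as a small helper. This is a sketch under stated assumptions: `model_call` stands in for whatever function sends a prompt to Gemini, and the prompt wording is illustrative:

```python
def translate(text: str, target_lang: str, model_call) -> str:
    """Translate text via the model; `model_call` sends a prompt to Gemini."""
    if not text.strip():
        return text  # nothing to translate
    prompt = (f"Translate the following text into {target_lang}. "
              f"Reply with the translation only:\n{text}")
    return model_call(prompt)
```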
- Open a second terminal in the `web` folder. From the main project folder:

  ```
  cd web
  ```
- Install Node packages (only needed once):

  ```
  npm install
  ```
- Set up the `.env.local` environment file
  - Create a file called `.env.local` inside the `web` folder.
  - Point it at the local backend like this:

    ```
    NEXT_PUBLIC_API_URL=http://localhost:8000
    ```
- Start the dev server:

  ```
  npm run dev
  ```
- Open the app in the browser at `http://localhost:3000`.
- Auth pages
  - Register: create a new user (this calls the backend user API).
  - Login: log in the user and store the token.
- Main agriculture flow
  - Page to upload images / video of crops.
  - Attach audio (or use audio that was transcribed).
  - Send location (lat/lng) from the browser if allowed.
  - Show the diagnosis result from the backend: crop, disease, treatment, confidence.
- Chat page
  - Let the user ask questions about a chosen diagnosis.
  - Show AI answers and a confidence flag (high/low).
- The web app talks to the backend API over HTTP (usually `localhost:8000`).
- The backend API talks to Gemini 3 using your `GEMINI_API_KEY`.
- The database stores:
  - users
  - diagnoses
  - chat history
- Location + media + text all work together to give better crop diagnostics.
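The shapes of the three stored record types can be sketched with plain dataclasses. The field names here are illustrative assumptions based on the sections above; the real database schema may differ:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    email: str

@dataclass
class Diagnosis:
    id: int
    user_id: int       # links the diagnosis to a user
    crop: str
    disease: str
    treatment: str
    confidence: float  # drives the "say unsure when low" rule

@dataclass
class ChatMessage:
    id: int
    diagnosis_id: int  # chat is tied to a specific diagnosis
    role: str          # "user" or "assistant"
    text: str
```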
If you follow the steps above (backend first, then frontend), you should be able to:
- Register a user
- Log in
- Upload crop media and location
- See a diagnosis and confidence
- Chat with the AI about that diagnosis