
Zeenmo is an AI Call Assistant for Relationship improvement


dragonhub0710/zeenmo


Zeenmo

Zeenmo is a ChatGPT-powered application that helps solve relationship problems through conversation, using your own approach.

  • Integration with ChatGPT (gpt-4o)
  • Audio streaming implementation
  • Speech-to-text implementation using the OpenAI Whisper model
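The features above form a single round-trip: recorded audio is transcribed by Whisper, and the transcript is answered by gpt-4o. A minimal client-side sketch of that flow, assuming hypothetical `/transcribe` and `/chat` endpoints (`sendRecording` and both paths are illustrative, not from the repo):

```javascript
// Hedged sketch: audio blob -> /transcribe (Whisper) -> /chat (gpt-4o) -> reply.
// Endpoint paths, payload shapes, and sendRecording are assumptions.
async function sendRecording(blob, fetchFn = fetch) {
  // Upload the recorded audio for transcription.
  const form = new FormData();
  form.append('file', blob, 'recording.wav');
  const sttRes = await fetchFn('/transcribe', { method: 'POST', body: form });
  const { text } = await sttRes.json();

  // Send the transcript to the chat backend and return the assistant's reply.
  const chatRes = await fetchFn('/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: text }),
  });
  const { reply } = await chatRes.json();
  return reply;
}
```

Injecting `fetchFn` keeps the sketch testable outside a browser; in the app it would default to the global `fetch`.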

Integration with ChatGPT (gpt-4o model)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are Zeenmo. ..."},
    {"role": "user", "content": "How are you today?"},
    {"role": "assistant", "content": "I am fine. Thank you."},
]
response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    temperature=0.2,
)
print(response.choices[0].message.content)

Implementation of Audio Streaming

Generating blob data in React.js using react-mic

import { useState } from 'react';
import { ReactMic } from 'react-mic';

const [record, setRecord] = useState(false);

setRecord(true);   // Start audio recording

setRecord(false);  // Stop audio recording

const onStop = (recordedBlob) => {   // Called when the audio recording finishes
  console.log(recordedBlob);
};

<ReactMic
  record={record}
  className="sound-wave hidden"
  onStop={onStop}
  strokeColor="#000000"
  backgroundColor="#FF4081"
  visualSetting="sinewave"
  visualSettingFillColor="#ffffff"
/>

Generating stream data

navigator.mediaDevices.getUserMedia({ audio: true })   // Request microphone access
.then((stream) => {
  setRecord(true);
  handleStream(stream);   // Process with stream data
})
.catch((error) => {
  console.error('Error accessing microphone:', error);
});

Using the Web Audio API

const handleStream = (stream) => {
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaStreamSource(stream);
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  const bufferLength = analyser.frequencyBinCount;
  const dataArray = new Uint8Array(bufferLength);
  source.connect(analyser);

  let count = 0;   // Running count of consecutive silent samples

  function draw() {
    analyser.getByteTimeDomainData(dataArray);
    for (let x = 0; x < bufferLength; x++) {
      ...
    }

    if (count < 100 * bufferLength) {   // Finish the audio recording when there is no noise for 3s
      requestAnimationFrame(draw);
    } else {
      setRecord(false);
    }
  }

  draw();   // Start the analysis loop
}
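The elided loop above is where each frame is checked for noise. One way that check could work, as a hedged sketch: `getByteTimeDomainData` fills the buffer with unsigned 8-bit samples centred on 128, so a frame is "silent" when every sample stays near 128 (`isSilentFrame` and the threshold value are illustrative, not from the repo).

```javascript
// Hedged sketch of a per-frame silence check for the Web Audio loop above.
function isSilentFrame(dataArray, threshold = 2) {
  // Silence means every sample stays within `threshold` of the 128 midpoint
  // of the unsigned 8-bit time-domain waveform.
  return dataArray.every((sample) => Math.abs(sample - 128) <= threshold);
}

// Inside draw(), a running counter of silent samples could then drive the
// stop condition shown above:
// if (isSilentFrame(dataArray)) count += bufferLength; else count = 0;
```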

Implementation of Speech-to-Text using OpenAI Whisper model

import os

from flask import request
from openai import OpenAI

client = OpenAI()

audio = request.files['file']

filename = 'recording.wav'
filepath = os.path.join(os.path.dirname(__file__), filename)
audio.save(filepath)

with open(filepath, "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        language="en",
    )
print(transcript.text)

Generating a Lottie animation for the waveform

import Lottie from 'react-lottie';
import animationData from "@/widgets/lottie/waveform";

const defaultOptions = {
  loop: true,
  autoplay: true,
  animationData: animationData,
  rendererSettings: {
    preserveAspectRatio: "xMidYMid slice"
  }
};

<Lottie
  options={defaultOptions}
  height={100}
  isStopped={isStopped}
/>

Generating animations using the framer-motion library

import { motion } from "framer-motion"

<motion.div
  initial={{ opacity: 0 }}
  animate={{
    transition: {
      duration: 3,
      delay: 1,
    },
    opacity: [0, 1, 1, 0.5, 0],
    y: [0, -50, -50, -150, -400],
  }}
  exit="exit"
>
  I'm Zeenmo!
</motion.div>

Using Mixpanel for tracking

import mixpanel from 'mixpanel-browser';

const mixpanelToken = "XXXXXX";   // Mixpanel project token

mixpanel.init(mixpanelToken);

mixpanel.track(event, properties);   // Event name plus optional properties
// mixpanelService.track("audio_recording");
// mixpanelService.track("refreshed", { msg_length: length });
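The comments above call a `mixpanelService` wrapper rather than `mixpanel` directly. A hedged sketch of what such a wrapper might look like (`createMixpanelService` is an illustrative name, not from the repo; the client is injected so the wrapper can be exercised without a browser):

```javascript
// Hedged sketch of a thin tracking wrapper around mixpanel-browser.
function createMixpanelService(client, token) {
  client.init(token);
  return {
    // Forward an event name and optional properties to Mixpanel.
    track(event, properties = {}) {
      client.track(event, properties);
    },
  };
}

// Usage (browser):
// import mixpanel from 'mixpanel-browser';
// const mixpanelService = createMixpanelService(mixpanel, mixpanelToken);
// mixpanelService.track("audio_recording");
```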
