OmNomNicient

Table of Contents
  1. About the Project
  2. Features
  3. Technologies Used
  4. Rationale for Choice of Technologies
  5. Contributors

About the Project

OmNomNicient helps users identify dishes from photos and recommends restaurants selling those dishes.
It lets users who are unfamiliar with a dish's name easily identify it and find places around them that sell it.

Features

1. Search For Dishes

  • Users upload an image of the dish to be identified.
  • They then input the address near which they would like to find restaurants selling that dish.
  • Upon submission, OmNomNicient identifies the dish and recommends restaurants near the given address.
How We Built This:
  1. Identify Dish:
    When an image is uploaded, it is passed through a supervised machine-learning model built with TensorFlow, which identifies the dish.
  2. Identify Address Coordinates:
    The address the user inputs is then passed to Google Maps' Geocoding API, which returns the latitude and longitude coordinates of the address.
  3. Find Restaurants Near The Address, Selling The Dish:
    The identified dish and the latitude and longitude coordinates are then passed to Google Maps' Places API, which returns restaurants selling the dish.
  4. Find Restaurant Photos:
    A third API call is made to Google Maps' Places API to retrieve the restaurant images.
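Steps 2 and 3 above can be sketched as request-builders for the two Google Maps endpoints. The key placeholder, the parameter values (radius, type), and the function names are illustrative assumptions, not the app's actual code:

```javascript
// Sketch of the search pipeline (steps 2-3). GOOGLE_API_KEY and the
// exact parameters are assumptions for illustration only.
const GOOGLE_API_KEY = "YOUR_KEY_HERE"; // hypothetical placeholder

// Step 2: build a Geocoding API request for the user's address.
function geocodeUrl(address) {
  const params = new URLSearchParams({ address, key: GOOGLE_API_KEY });
  return `https://maps.googleapis.com/maps/api/geocode/json?${params}`;
}

// Step 3: build a Places API nearby-search request, using the
// identified dish as the keyword and the geocoded coordinates.
function placesUrl(dish, lat, lng) {
  const params = new URLSearchParams({
    location: `${lat},${lng}`,
    radius: "1500",       // metres; assumed search radius
    type: "restaurant",
    keyword: dish,
    key: GOOGLE_API_KEY,
  });
  return `https://maps.googleapis.com/maps/api/place/nearbysearch/json?${params}`;
}
```

Keeping the URL construction in small pure functions like these makes each step of the pipeline easy to test without hitting the network.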

(Demo GIFs: hokkien-mee-gif, pizza-gif)

2. Bookmark Favourite Restaurants

  • Add restaurants to the favourites category to keep track of them.

(Demo GIF: fav-gif-new)

3. Add Restaurants To Past Eats

  • Remember the places you've tried before.

(Demo GIF: pasteats-gif)

4. Move Restaurants From Favourites to Past Eats & Delete Them

  • Easily move a favourited restaurant into past eats once you've eaten there, and delete restaurants from the past eats category.

(Demo GIF: fav-to-pasteats-gif)

5. Look Through Previous Searches

  • Look through your history of previous searches.
  • View search results again.

(Demo GIF: past-searches-gif)

6. Characteristics of Results

See the following for each restaurant:
  • Restaurant name
  • Restaurant rating
  • Restaurant address
  • Restaurant image

(Screenshot of search results, 2022-04-25)

Technologies Used

Frontend

User Interface:

Component Routing:

Backend

Server:

Database:

Authentication:

Image Detection:

Restaurant Search:

Rationale for Choice of Technologies

PostgreSQL

Reason for Choosing a SQL Database:
  • Given our app's small size and low complexity, we did not have much data to store, and the data had few relationships with other data. Hence, we did not require the flexible document embedding that a NoSQL database provides, and a relational database like PostgreSQL was a straightforward fit.

Google Maps API

  • We wanted users to be able to type in their address and use it to search for restaurants nearby.
  • The Google Maps Platform provides several APIs that let us convert the user's address into latitude and longitude coordinates and use these to search for restaurants serving the identified dish.
  • Additionally, the Places API allowed us to customise our search parameters to get more accurate location results.
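Pulling the coordinates out of a Geocoding response is a small, self-contained step. The sketch below assumes the documented response shape (`results[0].geometry.location`); the function name is ours, not the app's:

```javascript
// Extract { lat, lng } from a Geocoding API-style response.
// The results[0].geometry.location shape follows Google's
// documented Geocoding response format.
function extractCoords(geocodeResponse) {
  const first = geocodeResponse.results && geocodeResponse.results[0];
  if (!first) throw new Error("Address could not be geocoded");
  const { lat, lng } = first.geometry.location;
  return { lat, lng };
}
```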

TensorFlow

  • Our desire to use a machine-learning model to identify dishes in an image led us to TensorFlow.
  • TensorFlow's ability to train a model to identify objects in an image suited this need.
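A trained classifier ultimately returns one probability per class, and the predicted dish is the label with the highest score. The sketch below shows only that final step; the dish labels and function name are hypothetical examples, not the app's actual label set:

```javascript
// Final step of dish identification: map the model's class
// probabilities to a label by picking the highest score.
// These labels are hypothetical examples.
const DISH_LABELS = ["hokkien mee", "pizza", "laksa"];

function topDish(probabilities, labels = DISH_LABELS) {
  let best = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[best]) best = i;
  }
  return labels[best];
}
```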

JSON Web Token

  • We chose JSON Web Tokens as our authentication method because their digital signatures let the server verify that a token has not been tampered with.

Contributors

Dominique Yeo | LinkedIn

Shannon Suresh | LinkedIn
