Commit 4b58881

Added command-line options for port and ml endpoint
1 parent b3f8ba1 commit 4b58881

2 files changed: +33 -17 lines changed


README.md

Lines changed: 16 additions & 8 deletions
@@ -2,15 +2,13 @@
 
 A demo web app for MAX using the Image Caption Generator model
 
-### Before Starting the Web App
+## Before Starting the Web App
 
-Before starting this web app you must setup the MAX Image Caption Generator REST endpoint by following the README here:
+Before starting this web app you must setup the MAX Image Caption Generator REST endpoint by following the README at:
 
-https://github.com/IBM/MAX-Image-Caption-Generator
+[MAX Image Caption Generator GitHub](https://github.com/IBM/MAX-Image-Caption-Generator)
 
-You need to have the Image Caption Generator REST endpoint running at `http://localhost:5000` for the web app to run.
-
-### Starting the Web App
+## Starting the Web App
 
 Before running this web app you must install it's dependencies:
 
@@ -20,7 +18,17 @@ You then start the web app by running:
 
     python app.py
 
-### Instructions for Docker
+Once it's finished processing the default images (< 1 minute) you can then access the web app at:
+[http://localhost:8088](http://localhost:8088)
+
+The Image Caption Generator endpoint must be available at `http://localhost:5000` for the web app to successfully start.
+
+If you want to use a different port or are running the ML endpoint at a different location
+you can change them with command-line options:
+
+    python app.py --port=[new port] --ml-endpoint=[endpoint url including protocol and port]
+
+## Instructions for Docker
 
 To run the web app with Docker you need to allow the containers running the web
 server and the REST endpoint to share the same network stack. This is done in
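
As a concrete example of the new options added in the hunk above: to serve the web app on port 8090 while pointing at an Image Caption Generator endpoint on another host, the invocation would look like the following (the port number and hostname here are placeholders, not values from the commit):

    python app.py --port=8090 --ml-endpoint=http://192.168.1.20:5000
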
@@ -39,7 +47,7 @@ Run the web app container using:
 
     docker run --net='container:max-im2txt' -it webapp
 
-### JavaScript Libraries
+## JavaScript Libraries
 
 This web app includes the following js and css libraries
 
app.py

Lines changed: 17 additions & 9 deletions
@@ -1,12 +1,15 @@
 import collections, json, logging, os, requests, signal, time
 from tornado import httpserver, ioloop, web
+from tornado.options import define, options, parse_command_line
 
+# Command Line Options
+define("port", default=8088, help="Port the web app will run on")
+define("ml-endpoint", default="http://localhost:5000", help="The Image Caption Generator REST endpoint")
 
 # Setup Logging
 logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"), format='%(levelname)s: %(message)s')
 
 # Global variables
-ml_endpoint = "http://localhost:5000/model/predict"
 static_img_path = "static/img/images/"
 temp_img_prefix = "MAX-"
 image_captions = collections.OrderedDict()
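
The hunk above registers the two new options with tornado.options. Below is a minimal, self-contained sketch of that mechanism; it is not part of the commit and only assumes tornado is installed. Note that attribute access normalizes underscores to dashes, which is why `define("ml-endpoint", ...)` is later read back as `options.ml_endpoint`, and that `parse_command_line()` also gives the script a `--help` flag that prints the registered options with their help strings.

    # options_demo.py (hypothetical file) -- standalone illustration of the
    # tornado.options pattern used by app.py after this commit
    from tornado.options import define, options, parse_command_line

    # Same option names and defaults as the commit registers in app.py
    define("port", default=8088, help="Port the web app will run on")
    define("ml-endpoint", default="http://localhost:5000",
           help="The Image Caption Generator REST endpoint")

    parse_command_line()  # reads --port=... and --ml-endpoint=... from sys.argv

    # Attribute access normalizes underscores to dashes, so options.ml_endpoint
    # resolves the option registered as "ml-endpoint".
    print("port:", options.port)
    print("ml-endpoint:", options.ml_endpoint)
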
@@ -80,7 +83,7 @@ def signal_handler(sig, frame):
 def shutdown():
     logging.info("Cleaning up image files")
     clean_up()
-    logging.info("Stopping server")
+    logging.info("Stopping web server")
     server.stop()
     ioloop.IOLoop.current().stop()
 
@@ -101,27 +104,32 @@ def make_app():
 
 
 def main():
+    parse_command_line()
+
+    global ml_endpoint
+    ml_endpoint = options.ml_endpoint + "/model/predict"
+    logging.debug("Connecting to ML endpoint at %s", ml_endpoint)
+
     try:
         resp = requests.get(ml_endpoint)
     except requests.exceptions.ConnectionError:
-        logging.error("Cannot connect to the Object Detection API")
-        logging.error("Please run the Object Detection API docker image first")
+        logging.error("Cannot connect to the Image Caption Generator REST endpoint at %s", options.ml_endpoint)
         raise SystemExit
 
-    logging.info("Starting Server")
-    global server
+    logging.info("Starting web server")
     app = make_app()
+    global server
     server = httpserver.HTTPServer(app)
-    server.listen(8088)
+    server.listen(options.port)
     signal.signal(signal.SIGINT, signal_handler)
 
-    logging.info("Preparing ML Metadata")
+    logging.info("Preparing ML metadata")
     start = time.time()
     prepare_metadata()
     end = time.time()
     logging.info("Metadata prepared in %s seconds", end - start)
 
-    logging.info("Use Ctrl+C to stop server")
+    logging.info("Use Ctrl+C to stop web server")
     ioloop.IOLoop.current().start()
 
 