|
| 1 | +# Getting Started with Groundlight |
1 | 2 |
|
2 | | -# Getting Started |
3 | | - |
4 | | -## How to Build a Computer Vision Application with Groundlight's Python SDK |
5 | | - |
6 | | -If you're new to Groundlight AI, this is a good place to start. This is the equivalent of building a "Hello, world!" application. |
7 | | - |
8 | | -Don't code? [Reach out to Groundlight AI](mailto:support@groundlight.ai) so we can build a custom computer vision application for you. |
9 | | - |
10 | | - |
11 | | -### What's below? |
12 | | - |
13 | | - - [Prerequisites](#prerequisites) |
14 | | - - [Environment Setup](#environment-setup) |
15 | | - - [Authentication](#authentication) |
16 | | - - [Writing the code](#writing-the-code) |
17 | | - - [Using your application](#using-your-computer-vision-application) |
| 3 | +## Build Computer Vision Applications in Minutes with Groundlight's Python SDK |
18 | 4 |
|
| 5 | +Welcome to Groundlight AI! This guide will help you build your first computer vision application quickly and easily using our Python SDK. |
19 | 6 |
|
| 7 | +Don't code? [Contact our team](mailto:support@groundlight.ai) and we'll build a custom solution for you. |
20 | 8 |
|
21 | 9 | ### Prerequisites |
| 10 | +Before you begin, make sure you have: |
| 11 | +1. A [Groundlight account](https://dashboard.groundlight.ai/) |
| 12 | +2. An API token from the [Groundlight dashboard](https://dashboard.groundlight.ai/reef/my-account/api-tokens) |
| 13 | +3. Python 3.9 or newer |
22 | 14 |
|
23 | | -Before getting started: |
24 | | - |
25 | | -- Make sure you have python installed |
26 | | -- Install VSCode |
27 | | -- Make sure your device has a c compiler. On Mac, this is provided through XCode while in Windows you can use the Microsoft Visual Studio Build Tools |
28 | | - |
29 | | -### Environment Setup |
30 | | - |
31 | | -Before you get started, you need to make sure you have python installed. Additionally, it’s good practice to set up a dedicated environment for your project. |
| 15 | +### Installing the Groundlight SDK |
32 | 16 |
|
33 | | -You can download python from https://www.python.org/downloads/. Once installed, you should be able to run the following in the command line to create a new environment |
| 17 | +You can install the Groundlight SDK via pip. When installing Python packages, it is a best practice to install them
| 18 | +inside a virtual environment. Run the following in the command line to create a new environment:
| 19 | +```bash |
| 20 | +python3 -m venv groundlight-env |
| 21 | +``` |
34 | 22 |
|
35 | | - ```bash |
36 | | - python3 -m venv gl_env |
37 | | - ``` |
38 | | -Once your environment is created, you can activate it with |
39 | | - ```bash |
40 | | - source gl_env/bin/activate |
41 | | - ``` |
42 | | -For Linux and Mac or if you’re on Windows you can run |
43 | | - ```bash |
44 | | - gl_env\Scripts\activate |
45 | | - ``` |
46 | | -The last step to setting up your python environment is to run |
47 | | - ```bash |
48 | | - pip install groundlight |
49 | | - pip install framegrab |
50 | | - ``` |
51 | | -in order to download Groundlight’s SDK and image capture libraries. |
| 23 | +Activate the virtual environment:
| 24 | +- On macOS or Linux, `source groundlight-env/bin/activate` |
| 25 | +- On Windows, `.\groundlight-env\Scripts\activate` |
52 | 26 |
|
| 27 | +Now, you can install the Groundlight SDK using pip: |
| 28 | +```bash |
| 29 | +pip install groundlight |
| 30 | +``` |
53 | 31 |
|
| 32 | +For more detailed installation instructions, see the [installation guide](/docs/installation/). |
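
To confirm the install worked, you can try importing the package. This is just a minimal sanity check; the `sdk_available` flag and the printed message are illustrative, not part of the SDK.

```python
# Minimal sanity check that `pip install groundlight` succeeded.
try:
    import groundlight  # noqa: F401
    sdk_available = True
except ImportError:
    sdk_available = False

print(f"groundlight installed: {sdk_available}")
```

If this prints `False`, double-check that your virtual environment is activated before running pip.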
54 | 33 |
|
55 | 34 | ### Authentication |
56 | 35 |
|
57 | 36 | In order to verify your identity while connecting to your custom ML models through our SDK, you’ll need to create an API token. |
58 | 37 |
|
59 | | -1. Head over to [https://dashboard.groundlight.ai/](https://dashboard.groundlight.ai/) and create or log into your account |
| 38 | +1. Log in to your account at [https://dashboard.groundlight.ai/](https://dashboard.groundlight.ai/)
60 | 39 |
|
61 | 40 | 2. Once in, click on your username in the upper right hand corner of your dashboard: |
62 | 41 |
|
63 | 42 |  |
64 | 43 |
|
65 | | -3. Select API Tokens, then enter a name, like ‘personal-laptop-token’ for your api token. |
| 44 | +3. Select API Tokens, then enter a name for your API token, such as `personal-laptop-token`.
66 | 45 |
|
67 | 46 |  |
68 | 47 |
|
69 | 48 | 4. Copy the API Token for use in your code |
70 | 49 |
|
71 | | -IMPORTANT: Keep your API token secure! Anyone who has access to it can impersonate you and will have access to your Groundlight data |
| 50 | +Set the API token in your terminal: |
| 51 | +```bash |
| 52 | +# MacOS / Linux |
| 53 | +export GROUNDLIGHT_API_TOKEN='your-api-token' |
| 54 | +``` |
| 55 | +```powershell |
| 56 | +# Windows (setx applies to new terminal sessions, so open a fresh terminal afterwards)
| 57 | +setx GROUNDLIGHT_API_TOKEN "your-api-token" |
| 58 | +``` |
| 59 | +:::important |
| 60 | +Keep your API token secure! Anyone who has access to it can impersonate you and can access your Groundlight data.
| 61 | +::: |
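
The client reads `GROUNDLIGHT_API_TOKEN` from the environment when it is constructed, so no token appears in your code. A small sketch (the token string here is a placeholder, and passing the token explicitly via the `api_token` constructor argument is shown only as a commented alternative):

```python
import os

# The Groundlight client looks up GROUNDLIGHT_API_TOKEN at construction time.
os.environ["GROUNDLIGHT_API_TOKEN"] = "your-api-token"  # placeholder value

token = os.environ.get("GROUNDLIGHT_API_TOKEN")
print(f"token set: {token is not None}")

# Alternatively, pass the token explicitly instead of using the environment:
# gl = Groundlight(api_token=token)
```

Using the environment variable keeps the secret out of source control.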
72 | 62 |
|
| 63 | +### Call the Groundlight API |
73 | 64 |
|
74 | | - ```bash |
75 | | - $env:GROUNDLIGHT_API_TOKEN="YOUR_API_TOKEN_HERE" |
76 | | - ``` |
77 | | -Or on Mac |
78 | | - ```bash |
79 | | - export GROUNDLIGHT_API_TOKEN="YOUR_API_TOKEN_HERE" |
80 | | - ``` |
81 | | - |
82 | | - |
83 | | -### Writing the code |
84 | | - |
85 | | -For your first and simple application you can build a binary detector, which is computer vision model where the answer will either be 'Yes' or 'No'. Groundlight AI will confirm if the thumb is facing up or down ("Is the thumb facing up?"). |
| 65 | +Call the Groundlight API by creating a `Detector` and submitting an `ImageQuery`. |
86 | 66 |
|
87 | 67 | The example below submits a saved image file, but you can also capture frames from a laptop or USB camera.
88 | 68 |
|
89 | | - ```python |
90 | | - import groundlight |
91 | | - import cv2 |
92 | | - from framegrab import FrameGrabber |
93 | | - import time |
94 | | - |
95 | | - gl = groundlight.Groundlight() |
96 | | - |
97 | | - detector_name = "trash_detector" |
98 | | - detector_query = "Is the trash can overflowing" |
99 | | - |
100 | | - detector = gl.get_or_create_detector(detector_name, detector_query) |
101 | | - |
102 | | - grabber = list(FrameGrabber.autodiscover().values())[0] |
103 | | - |
104 | | - WAIT_TIME = 5 |
105 | | - last_capture_time = time.time() - WAIT_TIME |
106 | | - |
107 | | - while True: |
108 | | - frame = grabber.grab() |
109 | | - |
110 | | - cv2.imshow('Video Feed', frame) |
111 | | - key = cv2.waitKey(30) |
112 | | - |
113 | | - if key == ord('q'): |
114 | | - break |
115 | | - # # Press enter to submit an image query |
116 | | - # elif key in (ord('\r'), ord('\n')): |
117 | | - # print(f'Asking question: {detector_query}') |
118 | | - # image_query = gl.submit_image_query(detector, frame) |
119 | | - # print(f'The answer is {image_query.result.label.value}') |
120 | | - |
121 | | - # # Press 'y' or 'n' to submit a label |
122 | | - # elif key in (ord('y'), ord('n')): |
123 | | - # if key == ord('y'): |
124 | | - # label = 'YES' |
125 | | - # else: |
126 | | - # label = 'NO' |
127 | | - # image_query = gl.ask_async(detector, frame, human_review="NEVER") |
128 | | - # gl.add_label(image_query, label) |
129 | | - # print(f'Adding label {label} for image query {image_query.id}') |
130 | | - |
131 | | - # Submit image queries in a timed loop |
132 | | - now = time.time() |
133 | | - if last_capture_time + WAIT_TIME < now: |
134 | | - last_capture_time = now |
135 | | - |
136 | | - print(f'Asking question: {detector_query}') |
137 | | - image_query = gl.submit_image_query(detector, frame) |
138 | | - print(f'The answer is {image_query.result.label.value}') |
139 | | - |
140 | | - grabber.release() |
141 | | - cv2.destroyAllWindows() |
142 | | - ``` |
143 | | - This code will take an image from your connected camera every minute and ask Groundlight a question in natural language, before printing out the answer. |
144 | | - |
145 | | - |
146 | | - |
147 | | -### Using your computer vision application |
148 | | - |
149 | | -Just like that, you have a complete computer vision application. You can change the code and configure a detector for your specific use case. Also, you can monitor and improve the performance of your detector at [https://dashboard.groundlight.ai/](https://dashboard.groundlight.ai/). Groundlight’s human-in-the-loop technology will monitor your image feed for unexpected changes and anomalies, and by verifying answers returned by Groundlight you can improve the process. At app.groundlight.ai, you can also set up text and email notifications, so you can be alerted when something of interest happens in your video stream. |
150 | | - |
151 | | - |
152 | | - |
153 | | -### If You're Looking for More: |
154 | | - |
155 | | -Now that you've built your first application, learn how to [write queries](https://code.groundlight.ai/python-sdk/docs/getting-started/writing-queries). |
156 | | - |
157 | | -Want to play around with sample applications built by Groundlight AI? Visit [Guides](https://www.groundlight.ai/guides) to build example applications, from detecting birds outside of your window to running Groundlight AI on a Raspberry Pi. |
| 69 | +```python title="ask.py" |
| 70 | +from groundlight import Groundlight, Detector, ImageQuery |
| 71 | + |
| 72 | +gl = Groundlight() |
| 73 | + |
| 75 | +detector: Detector = gl.get_or_create_detector( |
| 76 | + name="thumbs-up", |
| 77 | + query="Is the thumb facing up?", |
| 78 | +) |
| 79 | + |
| 80 | +img = "./docs/static/img/thumb.jpg" # Image can be a file path, a PIL image, or a numpy array
| 81 | +image_query: ImageQuery = gl.submit_image_query(detector=detector, image=img)
| 82 | + |
| 83 | +print(f"The answer is {image_query.result.label}") |
| 84 | +print(image_query) |
| 85 | +``` |
| 86 | + |
| 87 | +Run the code using `python ask.py`. The code will submit an image to the Groundlight API and print the result: |
| 88 | +``` |
| 89 | +The answer is YES
| 90 | +ImageQuery( |
| 91 | + id='iq_2pL5wwlefaOnFNQx1X6awTOd119', |
| 92 | + query="Is the thumb facing up?", |
| 93 | + detector_id='det_2owcsT7XCsfFlu7diAKgPKR4BXY', |
| 94 | + result=BinaryClassificationResult( |
| 95 | + confidence=0.9995857543478209, |
| 96 | + label=<Label.YES: 'YES'> |
| 97 | + ), |
| 98 | + created_at=datetime.datetime(2025, 2, 25, 11, 5, 57, 38627, tzinfo=tzutc()), |
| 99 | + patience_time=30.0, |
| 100 | + confidence_threshold=0.9, |
| 101 | + type=<ImageQueryTypeEnum.image_query: 'image_query'>, |
| 102 | + result_type=<ResultTypeEnum.binary_classification: 'binary_classification'>, |
| 103 | + metadata=None |
| 104 | +) |
| 105 | +``` |
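
In a real application you will usually branch on the returned label and confidence. A small sketch using the field names from the printed `ImageQuery` above (the values here are hard-coded stand-ins for `image_query.result`, and the threshold is illustrative):

```python
# Stand-ins for image_query.result.label and image_query.result.confidence
label = "YES"
confidence = 0.9996

CONFIDENCE_THRESHOLD = 0.9  # matches confidence_threshold in the output above

if label == "YES" and confidence >= CONFIDENCE_THRESHOLD:
    decision = "thumb is up"
else:
    decision = "thumb is down, or the model is unsure"

print(decision)
```

Checking the confidence as well as the label lets you treat low-confidence answers differently, for example by waiting for a human-verified result.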
| 106 | + |
| 107 | +For more information on the Groundlight SDK, see the [API Reference](/docs/api-reference/), or check out our [guide to building applications with the Groundlight SDK](/docs/guide/). |
| 108 | + |
| 109 | +### Using Your Computer Vision Application |
| 110 | + |
| 111 | +Congratulations! You now have a fully functional computer vision application. You can easily customize the code and configure detectors for your specific use cases. |
| 112 | + |
| 113 | +Monitor and enhance your detector's performance through the [Groundlight Dashboard](https://dashboard.groundlight.ai/). |
| 114 | +Groundlight's human-in-the-loop technology intelligently monitors your image feed for anomalies and unexpected changes. |
| 115 | +By reviewing and verifying results, you continuously improve the system's accuracy. Through the dashboard, you can also |
| 116 | +configure text and email notifications to alert you when important events are detected in your video stream. |
| 117 | + |
| 118 | +### Next Steps |
| 119 | + |
| 120 | +Now that you've built your first application, learn how to write effective [queries](/docs/getting-started/writing-queries). |
| 121 | + |
| 122 | +Ready to explore more possibilities? Visit our [Guides](https://www.groundlight.ai/guides) to discover sample |
| 123 | +applications built with Groundlight AI — from bird detection systems to Raspberry Pi implementations. |