Commit a4aed03: Update quickstart.mdx
1 parent a2f7600

1 file changed: quickstart.mdx, +128 additions, -208 deletions

@@ -4,234 +4,154 @@ description: "Start building AI features in under five minutes"
 mode: "wide"
 ---
 
-<Warning>
-TO DO: verify CLI URLs and internal reference links Verify CLI commands Add screenshots
-</Warning>
-
-This guide walks you through getting started with Hypermode using the
-[Simple LLM Prompt Template](https://github.com/hypermodeinc/base-template).
-
-You’ll also learn how to customize your functions to tailor an app to your needs.
-
-Hypermode scalable, secure infrastructure is globally distributed deliver your functions from data
-centers near your users and data for optimal performance.
-
-During development, Hypermode provides tools to understand and debug your AI projects such as
-automatic preview and production environments, step-by-step inference analysis, and an in-browser
-API client to explore your project.
-
-## Before you begin
-
-To get started, create an account with Hypermode. You can select the plan that's right for you.
-
-[Sign up](https://hypermode.com/sign-up) If you've never used Hypermode before, sign up for a new
-Hypermode account
-
-[Log in](https://hypermode.com/sign-in) If you already have a Hypermode account, log in to get
-started
-
-Once you create an account, you can choose to authenticate either with a Git provider or by using an
-email. When using email authentication, you may need to confirm both your email address and a phone
-number.
+In this quickstart we'll show you how to get set up with Hypermode and build an intelligent API that
+you can integrate into your app. You'll learn how to use the basic components of a Modus app and how
+to deploy it to Hypermode.
 
 ## Prerequisites
 
-Before you start, make sure you have the following:
-
-- [Github Account](https://github.com/join)
-- [Github CLI](https://cli.github.com/) installed
 - [Node.js](https://nodejs.org/en/download/package-manager) - v22 or higher
-- Text editor - We recommend [VS Code](https://code.visualstudio.com/)
-
-### Install the Hyp CLI
-
-While many of our instructions use the console, you can also use
-[Hyp CLI](https://hypermode-modus-docs.mintlify.app/hyp-cli#hyp-cli) to carry out most tasks on
-Hypermode.
-
-<CodeGroup>
-
-```bash cURL
-curl -sSL http://install.hypermode.com/hyp.sh | bash
-```
-
-```js npm
-npm install -g @hypermode/hyp
-```
-
-</CodeGroup>
-
-You can get started with Hypermode’s
-[Instant Vector Search](https://github.com/hypermodeinc/hyper-commerce) template using either sample
-data or your own, without needing to write any code or set up a GitHub repository. This lets you
-explore the template and Hypermode’s features before committing any time and effort.
-
-### Step 1: Clone the template
-
-In your terminal, run the following command to create a repo called "HypermodeQuickstart" based on
-our base template:
-
-```bash
-gh repo create hypermode-quickstart --template hypermodeinc/base-template
-```
-
-Clone the repository to your local machine:
-
-<Warning> Make sure to replace `your-username` with your GitHub username. </Warning>
-
-```bash
-git clone https://github.com/your-username/hypermode-quickstart.git
-```
-
-Navigate into the cloned repository:
-
-```bash
-cd base-template
-```
-
-### Step 2: Log in to Hypermode
-
-```bash
-hyp login
-```
-
-### Step 3: Deploy your project to Hypermode
-
-When you execute the command `hyp deploy` in the project directory, Hypermode automatically triggers
-a build and deploy process, and you can monitor the progress in the console. If you had specified a
-custom model or collection in the manifest, Hypermode would automatically provision the
-infrastructure for you. However, in this case, the template uses a shared model and no collection.
-
-```bash
-hyp deploy hypermode-quickstart
-```
-
-### Step 4: Test your API
-
-After deploying your project, go to your [Hypermode dashboard](https://hypermode.com/go). You can
-run a few sample queries in the web console to verify it's working as expected. In the following
-query, we're going to use the `generateText` function to generate text from the shared Meta Llama
-3.1 model based on the prompt "How are black holes created?"
-
-```GraphQL
-query myPrompt {
-  generateText(text:"How are black holes created?")
-}
-```
-
-<Frame>
-  <img
-    className="block"
-    src="/images/hyp-quickstart/graphiql-blackhole.png"
-    alt="Hypermode's console showing results of query 'how are black holes created'."
-  />
-</Frame>
-
-### Step 5: Inference history
-
-Let's dig deeper into the behavior of our AI service by looking at the inference details in the
-inference tab. You can see the step-by-step inference process and the inputs and outputs of the
-function at each step. We can see in this case, it took Llama 4.4 seconds to reply to the prompt. We
-can also see the entirety the inputs (system, user, and parameters).
-
-```json
-{
-  "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
-  "messages": [
-    {
-      "role": "system",
-      "content": "You are a helpful assistant. Limit your answers to 150 words."
-    },
+- Text editor - we recommend [VS Code](https://code.visualstudio.com/)
+- Terminal - access Modus through a command-line interface (CLI)
+- [GitHub Account](https://github.com/join)
+
+## Deploying your first Hypermode project
+
+<Steps>
+<Step title="Create Modus app">
+We built Hypermode on top of Modus, an open source, serverless framework for crafting
+intelligent functions and APIs, powered by WebAssembly. With Hypermode, you can deploy,
+secure, and observe your Modus apps.
+
+To get started, [create your first Modus app](/modus/quickstart). You can import this app into Hypermode in the next step.
+
+</Step>
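
For reference, a Modus app exposes its exported functions as a GraphQL API. A minimal function looks roughly like the sketch below; the function name is just an example, and the exact project layout comes from the Modus quickstart linked above.

```ts AssemblyScript
// Minimal sketch of a Modus function. Exported functions become fields on the
// GraphQL API that Hypermode serves for your app. The name is only an example.
export function sayHello(name: string): string {
  return "Hello, " + name + "! Your Modus app is running."
}
```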
+<Step title="Import Modus app">
+You can import your Modus app via the Hypermode Console or through the terminal with the Hyp CLI.
+<Tabs>
+<Tab title="Hypermode Console">
+Navigate to the [Hypermode Console](https://hypermode.com/go) and click **New Project**.
+When prompted, connect your GitHub account and select the repository you want to import.
+Once you've selected your repository, click **Import** to deploy your app.
+</Tab>
+<Tab title="Hyp CLI">
+Install the Hyp CLI via cURL or npm:
+<CodeGroup>
+```bash cURL
+curl -sSL http://install.hypermode.com/hyp.sh | bash
+```
+
+```bash npm
+npm install -g @hypermode/hyp
+```
+</CodeGroup>
+From the terminal, run the following command to import your Modus app into Hypermode. This command creates your Hypermode project and deploys your app.
+
+```bash
+hyp init
+```
+</Tab>
+</Tabs>
+
+When Hypermode creates your project, it initiates a runtime for your app along with connections to any [Hypermode-hosted models](/hosted-models).
+
+</Step>
+<Step title="Explore API endpoint">
+After deploying your app, Hypermode lands you in your project home. You can see the status of your project
+and the API endpoint generated for your app.
+
+From the **Query** page, you can run a sample query to verify it's working as expected. In the following
+query, we're going to use the `generateText` function to generate text from the shared Meta Llama
+3.1 model based on the prompt "How are black holes created?"
+
+```GraphQL
+query myPrompt {
+  generateText(text:"How are black holes created?")
+}
+```
+
+<img
+  className="block"
+  src="/images/hyp-quickstart/graphiql-blackhole.png"
+  alt="Hypermode's console showing results of query 'how are black holes created'."
+/>
+
+</Step>
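
The same query also works against the endpoint directly. Below is a rough TypeScript sketch of calling the generated GraphQL API from your own code; the endpoint URL and API key are placeholders, and the Bearer-token header and `String!` argument type are assumptions, so copy the real values from your project home.

```ts
// Sketch: calling the project's GraphQL endpoint from a TypeScript client.
// The endpoint URL and API key below are placeholders; the Bearer-token header
// and the String! argument type are assumptions about the generated schema.
const endpoint = "https://<your-project>.hypermode.app/graphql"
const apiKey = "<your-api-key>"

async function ask(text: string): Promise<string> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      query: "query myPrompt($text: String!) { generateText(text: $text) }",
      variables: { text },
    }),
  })

  // The GraphQL response nests the result under data.<fieldName>.
  const { data } = await response.json()
  return data.generateText as string
}

ask("How are black holes created?").then(console.log)
```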
+<Step title="Observe function execution">
+Let's dig deeper into the behavior of our AI service when we ran the query by looking at the
+**Inferences** page. You can see the step-by-step inference process and the inputs and outputs of the
+model at each step of your function. We can see that, in this case, Llama took 4.4 seconds to reply to the prompt. We
+can also see the parameters on both the inputs and outputs.
+
+```json
 {
-      "role": "user",
-      "content": "How are black holes created?"
+  "model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
+  "messages": [
+    {
+      "role": "system",
+      "content": "You are a helpful assistant. Limit your answers to 150 words."
+    },
+    {
+      "role": "user",
+      "content": "How are black holes created?"
+    }
+  ],
+  "max_tokens": 200,
+  "temperature": 0.7
 }
-  ],
-  "max_tokens": 200,
-  "temperature": 0.7
-}
-```
-
-<Frame>
-  <img
-    className="block"
-    src="/images/hyp-quickstart/inference-history.png"
-    alt="Hypermode's console showing the inputs and outputs of the last model inference."
-  />
-</Frame>
-
-### Step 6: Customizing your AI API
-
-Let's make a few changes to app to explore how easy customizing your AI services is
-
-#### Update our function
-
-Our AI Service is responding using too formal of language. Let's update our `generateText` function
-to respond using exclusively surfing analogies.
+```
 
-1. Go to the `index.ts` file and locate the `generateText` function.
-2. Modify the `generateText` to only respond like a surfer, like this:
+<img
+  className="block"
+  src="/images/hyp-quickstart/inference-history.png"
+  alt="Hypermode's console showing the inputs and outputs of the last model inference."
+/>
 
-```typescript AssemblyScript
-export function generateText(text: string): string {
-  const model = models.getModel<OpenAIChatModel>("text-generator")
+</Step>
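
The `max_tokens` and `temperature` values in that log come from the function's model input. As a rough sketch of where you would tune them (the import paths and the `maxTokens`/`temperature` property names are assumptions and may differ between Modus SDK versions):

```ts AssemblyScript
// Sketch: setting the inference parameters that show up in the log above.
// Import paths and the maxTokens/temperature property names are assumptions
// and may vary by Modus SDK version.
import { models } from "@hypermode/modus-sdk-as"
import {
  OpenAIChatModel,
  SystemMessage,
  UserMessage,
} from "@hypermode/modus-sdk-as/models/openai/chat"

export function generateText(text: string): string {
  const model = models.getModel<OpenAIChatModel>("text-generator")

  const input = model.createInput([
    new SystemMessage("You are a helpful assistant. Limit your answers to 150 words."),
    new UserMessage(text),
  ])

  // These map to "max_tokens" and "temperature" in the inference log.
  input.maxTokens = 200
  input.temperature = 0.7

  const output = model.invoke(input)
  return output.choices[0].message.content.trim()
}
```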
+<Step title="Customize your function">
+Hypermode makes it simple to iterate quickly. Let's make a few changes to your app
+to explore how easy customizing your API is.
 
-  const input = model.createInput([
-    new SystemMessage(
-      "You are a helpful assistant. Only respond using surfing analogies and metaphors.",
-    ),
-    new UserMessage(text),
-  ])
+Our API is responding using language that is more formal than we want. Let's update our
+`generateText` function to respond using exclusively surfing analogies.
 
-  const output = model.invoke(input)
+Go to the `index.ts` file and locate the `generateText` function. Modify the function
+to only respond like a surfer, like this:
 
-  return output.choices[0].message.content.trim()
-}
-```
+```ts AssemblyScript
+export function generateText(text: string): string {
+  const model = models.getModel<OpenAIChatModel>("text-generator")
 
-3. Save the file and push an update to your git repo
+  const input = model.createInput([
+    new SystemMessage(
+      "You are a helpful assistant. Only respond using surfing analogies and metaphors.",
+    ),
+    new UserMessage(text),
+  ])
 
-Add the we modified to the git staging area:
+  const output = model.invoke(input)
 
-```bash
-git add index.ts
-```
-
-Commit your changes:
-
-```bash
-git commit -m "Update generateText function to use surfing analogies"
-```
-
-Finally, push the changes to your GitHub repository:
-
-```bash
-git push origin main
-```
+  return output.choices[0].message.content.trim()
+}
+```
 
-4. Test your changes Hypermode automatically redeploys whenever you push an update to your git
-repository. Go back to the Hypermode dashboard and run the same query as before. You should see
-the response now uses surfing analogies!
+Save the file and push the update to your git repo. Hypermode automatically redeploys
+whenever you push to the target branch. Go back to the Hypermode Console
+and run the same query as before. You should see the response now uses surfing analogies!
 
-{" "}
+<img
+  className="block"
+  src="/images/hyp-quickstart/graphiql-surfing.png"
+  alt="Hypermode's console showing results of new query."
+/>
 
-<Frame>
-  <img
-    className="block"
-    src="/images/hyp-quickstart/graphiql-surfing.png"
-    alt="Hypermode's console showing results of new query."
-  />
-</Frame>
+</Step>
+</Steps>
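
Once the surfer version works, you can push the customization a little further, for example by passing the persona in as an argument instead of hard-coding it. A sketch follows; the function and parameter names are examples only, and the import paths are assumptions that may vary by Modus SDK version.

```ts AssemblyScript
// Sketch: a variation of generateText that takes the persona as a parameter,
// so callers can steer the tone per request. Names here are examples only;
// import paths are assumptions and may vary by Modus SDK version.
import { models } from "@hypermode/modus-sdk-as"
import {
  OpenAIChatModel,
  SystemMessage,
  UserMessage,
} from "@hypermode/modus-sdk-as/models/openai/chat"

export function generateTextAs(persona: string, text: string): string {
  const model = models.getModel<OpenAIChatModel>("text-generator")

  const input = model.createInput([
    new SystemMessage(
      "You are a helpful assistant. Only respond using " + persona + " analogies and metaphors.",
    ),
    new UserMessage(text),
  ])

  const output = model.invoke(input)
  return output.choices[0].message.content.trim()
}
```

The generated GraphQL field would then accept both arguments, so the same query from the console can try out different personas without another deploy.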
 
 ## Next steps
 
 Hypermode and Modus provide a powerful platform for building and hosting AI models, data, and logic.
 You now know the basics of Hypermode. There's no limit to what you can build.
 
-Try chaining together multiple functions to create more complex applications or swapping out Llama
-3.1 in the manifest (`modus.json`) for a model you see on HuggingFace, like
-[distilbert](https://huggingface.co/distilbert/distilgpt2).
-
-You can also explore the
-[Modus SDK](https://hypermode-modus-docs.mintlify.app/modus/sdk/models#import-from-the-sdk) to
-connect to external models securely.
+And when you're ready to [integrate Hypermode into your app](/integrate-api), that's as simple as
+calling a GraphQL endpoint.
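
For example, chaining two model calls inside one function, as the earlier version of this page suggested, stays just as simple. A sketch, with names and import paths that follow the patterns above and are assumptions:

```ts AssemblyScript
// Sketch: chaining two model calls inside one exported function.
// The helper makes a generic call; answerThenSummarize composes two of them.
// Import paths are assumptions and may vary by Modus SDK version.
import { models } from "@hypermode/modus-sdk-as"
import {
  OpenAIChatModel,
  SystemMessage,
  UserMessage,
} from "@hypermode/modus-sdk-as/models/openai/chat"

function invokeChat(system: string, user: string): string {
  const model = models.getModel<OpenAIChatModel>("text-generator")
  const input = model.createInput([new SystemMessage(system), new UserMessage(user)])
  const output = model.invoke(input)
  return output.choices[0].message.content.trim()
}

export function answerThenSummarize(text: string): string {
  // First call: generate a full answer. Second call: condense it to one sentence.
  const answer = invokeChat("You are a helpful assistant. Limit your answers to 150 words.", text)
  return invokeChat("Summarize the following in one sentence.", answer)
}
```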
