
Commit 937455e

Merge branch 'main' into Viggy-117-patch-1

2 parents bbb29be + 20065d6

File tree

22 files changed: +399 −108 lines

.github/CODEOWNERS

Lines changed: 2 additions & 1 deletion

```diff
@@ -1,2 +1,3 @@
 * @dabit3 @shrimalmadhur @antojoseph @NimaVaziri @scotthconner @MC1823315 @mmurrs @non-fungible-nelson @MadelineAu @afkbyte
-/docs/products/eigenda @samlaf @dabit3 @MadelineAu @mmurrs
+/docs/products/eigenda @samlaf @dabit3 @MadelineAu @mmurrs
+/docs/products/eigencompute @shrimalmadhur @gpsanant @solimander @Chris-Moller @Viggy-117
```

docs/products/eigenlayer/developers/reference/ai-resources.mdx renamed to docs/get-started/ai-resources.mdx

Lines changed: 16 additions & 12 deletions

```diff
@@ -1,11 +1,10 @@
 ---
-sidebar_position: 3
+sidebar_position: 4
 title: AI Resources
 ---

 import CopyButton from '@site/src/components/CopyToClipboard';

-
 These text and markdown files contain documentation and code optimized for use with LLMs and AI tools.

 <table style={{ width: '100%', borderCollapse: 'collapse' }}>
@@ -18,8 +17,8 @@ These text and markdown files contain documentation and code optimized for use w
 </thead>
 <tbody>
 <CopyButton
-  title="llms.md"
-  filePath="/llms.md"
+  title="llms.txt"
+  filePath="/llms.txt"
   description="Navigation index of all EigenLayer documentation pages."
 />
 <CopyButton
@@ -33,9 +32,19 @@ These text and markdown files contain documentation and code optimized for use w
   description="AVS Developers documentation."
 />
 <CopyButton
-  title="eigenda-docs.md"
-  filePath="/eigenda-docs.md"
-  description="EigenDA documentation."
+  title="eigenx.md"
+  filePath="/eigenx.md"
+  description="EigenX CLI."
+/>
+<CopyButton
+  title="devkit.md"
+  filePath="/devkit.md"
+  description="DevKit CLI."
+/>
+<CopyButton
+  title="eigenlayer-contracts.md"
+  filePath="/eigenlayer-contracts.md"
+  description="Complete EigenLayer contracts."
 />
 <CopyButton
   title="operators-developer-docs.md"
@@ -47,11 +56,6 @@ These text and markdown files contain documentation and code optimized for use w
   filePath="/eigenlayer-contracts.md"
   description="Complete EigenLayer contracts."
 />
-<CopyButton
-  title="hello-world-avs.md"
-  filePath="/hello-world-avs.md"
-  description="Complete Hello World AVS."
-/>
 <CopyButton
   title="eigenlayer-go-sdk.md"
   filePath="/eigenlayer-go-sdk.md"
```

docs/get-started/developers/concepts/build-faster-hourglass-devkit.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -3,6 +3,8 @@ title: Build Faster with DevKit and Hourglass
 sidebar_position: 2
 ---

+> Click [here](https://github.com/Layr-Labs/devkit-cli) to get started building with DevKit.
+
 The task-based framework Hourglass provides the onchain and offchain infrastructure to run task-based AVSs.
 Hourglass enables you to extend your smart contracts by using an offchain coprocessor AVS.
```

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
 {
-  "position": 3,
+  "position": 1,
   "label": "EigenCloud"
 }
```

docs/get-started/eigencloud/eigencloud-overview.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@ economic incentives. Services are fragmented, tooling is underpowered, and integ
 builders who want to create high-performance, trust-minimized systems, this complexity is a blocker.

 EigenCloud is our answer. It reimagines the developer experience around [EigenLayer](../../products/eigenlayer/concepts/eigenlayer-overview.md), bundling together a suite of 1st party
-verifiable services, such as [EigenDA](../../products/eigenda/core-concepts/overview.md), [EigenCompute](../../products/eigencompute/eigencompute-overview.md), and [EigenVerify](../../products/eigenverify/eigenverify-overview.md) with new powerful developer tooling. This includes a
+verifiable services, such as [EigenDA](../../products/eigenda/core-concepts/overview.md), [EigenCompute](../../products/eigencompute/eigencompute-overview.md), [EigenAI](../../products/eigenai/eigenai-overview.md), and [EigenVerify](../../products/eigenverify/eigenverify-overview.md) with new powerful developer tooling. This includes a
 new CLI called DevKit for AVS and App developers, composable middleware and orchestration tools, unified billing and economic
 incentives, and best-in-class onboarding and monitoring capabilities. These capabilities empower developers to go from idea to
 deployment in days rather than months without needing to understand EigenLayer’s internals, enabling mainstream adoption of
```

docs/get-started/eigencloud/eigencloud-roadmap.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -31,11 +31,12 @@ AVSs and apps without needing deep protocol expertise. Work includes:
 ### Eigen Primitives: Verifiable Blockspace

 To make verifiability easy to adopt, EigenCloud provides foundational, 1st party services that AVSs and applications can plug into.
-These primitives, [EigenDA](../../products/eigenda/core-concepts/overview.md), [EigenCompute](../../products/eigencompute/eigencompute-overview.md), and [EigenVerify](../../products/eigenverify/eigenverify-overview.md), deliver critical capabilities, such as data availability, offchain
+These primitives, [EigenDA](../../products/eigenda/core-concepts/overview.md), [EigenCompute](../../products/eigencompute/eigencompute-overview.md), [EigenAI](../../products/eigenai/eigenai-overview.md), and [EigenVerify](../../products/eigenverify/eigenverify-overview.md), deliver critical capabilities, such as data availability, offchain
 compute, and adjudication. By making these services developer-accessible, we enable new classes of verifiable applications,
 reduce developer lift, and promote ecosystem standardization. Work includes:
 * EigenDA Throughput: Scale EigenDA throughput from 50 MB/s to hundreds of MB/s
 * EigenDA Latency: Reduce EigenDA latency from 10s to less than a second
+* Preview release of EigenAI for a seamless, OpenAI-compatible verifiable inference API
 * Preview release of EigenCompute for containerized, verifiable compute
 * Preview release of EigenVerify for fraud and dispute resolution
 * Finality gadgets, sequencing layers, and modular coordination for rollups
```
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
 {
-  "position": 2,
+  "position": 3,
   "label": "Operators"
 }
```
Lines changed: 4 additions & 0 deletions

```diff
@@ -0,0 +1,4 @@
+{
+  "position": 6,
+  "label": "EigenAI"
+}
```
Lines changed: 244 additions & 0 deletions

---
title: EigenAI Overview
sidebar_position: 5
---

## Overview

Build verifiable applications leveraging LLM inference without wondering whether the same LLM call might produce different results on different runs, or whether your prompts, models, or responses have been modified in any way. EigenAI offers a *deterministic, verifiable* API, compatible with the OpenAI API, so you can simply point your existing application at the API endpoint and start shipping AI-based applications you and your users can trust.

**AI is one of the greatest technological advancements in human history. Our mission is to enable any developer to build with scalable and verifiable AI, so that any developer anywhere in the world can build trusted applications.**

## Use Cases

Builders are leveraging EigenAI to build applications such as:
- **Prediction Market Agents**: Build agents that can interpret real-world events, news, and more, then place bets or dispute market settlements.
- **Trading Agents**: Build agents that reason through financial data with consistent quality of thinking (no need to worry about whether models are quantized in production), while ensuring they process all of the information they're given (unmodified prompts) and actually use the unmodified responses. You can also ensure they reliably make the same trading decision when prompted about the same data multiple times (via EigenAI's determinism).
- **Verifiable AI Games**: Build games with AI characters or AI governance, where you can prove to your users that their interactions with the AI aren't being gamed.
- **Verifiable AI Judges**: Whether for contests and games, admissions committees, or prediction market settlements, AI can verifiably judge entries and submissions.
<img src="/img/eigenai-use-cases.jpg" alt="EigenAI Use Cases"/>
## Get started

A few key points:

- By OpenAI compatibility we specifically mean the messages-based Chat Completions API: https://platform.openai.com/docs/api-reference/chat/create
- By "deterministic" we specifically mean that one request (prompt, parameters, etc.) sent to the API multiple times will produce the **same output bit-by-bit**, in contrast to the potentially varying responses one typically gets from an OpenAI, Anthropic, or similar endpoint, which do not guarantee deterministic behavior. We will be releasing more details shortly on how EigenAI achieves this across the stack.
- On wanting non-determinism: you can still introduce non-determinism in your application if you want. By setting a different seed for each request while otherwise keeping the request the same, the API will produce a different output.
- On verification: the code will be open sourced shortly after EigenAI's mainnet alpha release. Anyone with access to commodity GPUs will be able to leverage the determinism of EigenAI's software to re-execute any request and verify the responses given to them by EigenAI.
  - As we move toward general availability, we will stand up another API that can be used for this verification flow.
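Because responses are deterministic for a fixed request, third-party verification reduces to re-executing the request and comparing bytes. A minimal sketch of that comparison step (the helper name and the use of SHA-256 digests are illustrative assumptions, not part of the EigenAI API):

```python
import hashlib


def responses_match(original: str, re_executed: str) -> bool:
    """Bit-by-bit comparison of a stored response against a re-executed one.

    Hashing is optional here; it just means a verifier only needs to keep
    a digest of the original response rather than the full text.
    """
    digest_a = hashlib.sha256(original.encode("utf-8")).digest()
    digest_b = hashlib.sha256(re_executed.encode("utf-8")).digest()
    return digest_a == digest_b


# Identical bytes verify; any single-character drift fails the check.
print(responses_match("same story", "same story"))  # True
print(responses_match("same story", "Same story"))  # False
```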
## Usage

We're starting with support for the `gpt-oss-120b-f16` model based on initial demand, and we will expand from there. To get started or to request another model, visit our [onboarding page](https://onboarding.eigencloud.xyz/).

```bash
$ curl -X POST https://eigenai.eigencloud.xyz/v1/chat/completions \
  -H "X-API-Key: <api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss-120b-f16",
    "max_tokens": 120,
    "seed": 42,
    "messages": [{"role": "user", "content": "Write a story about programming"}]
  }' | jq

{
  "id": "chatcmpl-eigenai-llama-1758835890305965",
  "created": 1758835890,
  "model": "gpt-oss-120b-f16",
  "system_fingerprint": "eigenai-llama@0.1.0+eigenai-llama",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 120,
    "prompt_tokens": 72,
    "total_tokens": 192
  },
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "<|channel|>analysis<|message|>The user asks: \"Write a story about programming\". Need to produce a story. Probably creative, about programming, characters, maybe AI, debugging, etc. Should be engaging. No constraints other than story. I will craft a narrative with a protagonist, their journey, maybe metaphorical. Use vivid language. Provide a story. Ensure it's appropriate. Probably around 800-1200 words? Not too long, but enough. I'll deliver.<|end|><|start|>assistant<|channel|>final<|message|>**The Compiler’s Apprentice**\n\nWhen Maya first walked into the cramped, fluorescent‑lit office of LumenTech"
      },
      "finish_reason": "length"
    }
  ],
  "signature": "2ee2e48ae9f6796c1de67977981d937c886efec3957c9bfc28f7a3f3be8262f04af609f22878b3bc2f0c4858fd04005d76877208881abd473208317df1dcfaf11c"
}
```
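Handling the response is the same as for any Chat Completions payload; the extra `signature` field simply rides alongside the standard keys. A quick parsing sketch (the sample payload is abbreviated by hand from the response above, keeping only the fields used):

```python
import json

# Abbreviated by hand from the curl response above.
raw = """
{
  "model": "gpt-oss-120b-f16",
  "object": "chat.completion",
  "usage": {"completion_tokens": 120, "prompt_tokens": 72, "total_tokens": 192},
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "**The Compiler's Apprentice** ..."},
      "finish_reason": "length"
    }
  ],
  "signature": "2ee2e48ae9f6796c..."
}
"""

resp = json.loads(raw)
text = resp["choices"][0]["message"]["content"]  # the completion itself
signature = resp["signature"]                    # keep for later re-execution checks
finished = resp["choices"][0]["finish_reason"]   # "length" means max_tokens was hit
print(finished, resp["usage"]["total_tokens"])
```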
### OpenAI Client usage

```python
import json
import os
from typing import Any, Dict, List

from openai import OpenAI

# Assumes the API key is provided via an environment variable.
api_key = os.environ["EIGENAI_API_KEY"]
model = "gpt-oss-120b-f16"

client = OpenAI(
    base_url="https://eigenai.eigencloud.xyz/v1",
    default_headers={"x-api-key": api_key},
)

tools: List[Dict[str, Any]] = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

step1 = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "What is the weather like in Boston today?"}],
    tools=tools,
    tool_choice="auto",
)

"""
Response:
{
  "id": "chatcmpl-eigenai-llama-1758836092182536",
  "object": "chat.completion",
  "created": 1758727565,
  "model": "gpt-oss-120b-f16",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_YDzzMHFtp1yuURbiPe09uyHt",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\"location\":\"Boston, MA\",\"unit\":\"fahrenheit\"}"
            }
          }
        ],
        "refusal": null,
        "annotations": []
      },
      "finish_reason": "tool_calls"
    }
  ],
  "usage": {
    "prompt_tokens": 81,
    "completion_tokens": 223,
    "total_tokens": 304,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 192,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  }
}
"""

# Echo the tool call id from step 1 back in the follow-up request.
tool_call_id = step1.choices[0].message.tool_calls[0].id

messages_step2: List[Dict[str, Any]] = [
    {"role": "user", "content": "What is the weather like in Boston today?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": tool_call_id,
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": json.dumps({"location": "Boston, MA", "unit": "fahrenheit"}),
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": tool_call_id, "content": "58 degrees"},
    {"role": "user", "content": "Do I need a sweater for this weather?"},
]

step2 = client.chat.completions.create(model=model, messages=messages_step2)

"""
Response:
{
  "id": "chatcmpl-eigenai-llama-CJOZTszzusoHvAYYrW8PT5lv6vzKo",
  "object": "chat.completion",
  "created": 1758738719,
  "model": "gpt-oss-120b-f16",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "At around 58°F in Boston you’ll feel a noticeable chill—especially if there’s any breeze or you’re out in the morning or evening. I’d recommend throwing on a light sweater or layering a long-sleeve shirt under a casual jacket. If you tend to run cold, go with a medium-weight knit; if you’re just mildly sensitive, a thin cardigan or pullover should be enough.",
        "refusal": null,
        "annotations": []
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 67,
    "completion_tokens": 294,
    "total_tokens": 361,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 192,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  }
}
"""
```
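Between step 1 and step 2 above, the application itself must execute the tool and feed the result back as a `tool` message. A sketch of that dispatch step, assuming a local stand-in for the weather lookup (the helper names and fake data are illustrative; only the message shapes follow the example above):

```python
import json
from typing import Any, Dict


def get_current_weather(location: str, unit: str = "fahrenheit") -> str:
    """Stand-in for a real weather lookup; returns the string step 2 feeds back."""
    fake_data = {"Boston, MA": {"fahrenheit": "58 degrees", "celsius": "14 degrees"}}
    return fake_data.get(location, {}).get(unit, "unknown")


def run_tool_call(tool_call: Dict[str, Any]) -> Dict[str, Any]:
    """Execute one tool_call from the assistant message and wrap the result
    as the 'tool' role message expected by the follow-up request."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    result = {"get_current_weather": get_current_weather}[fn["name"]](**args)
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}


# Shaped like the tool_calls entry in the step-1 response above.
call = {
    "id": "call_YDzzMHFtp1yuURbiPe09uyHt",
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "arguments": "{\"location\":\"Boston, MA\",\"unit\":\"fahrenheit\"}",
    },
}
print(run_tool_call(call))
```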
## Supported parameters

This list will expand to cover the full parameter set of the Chat Completions API.

- `messages: array`
  - A list of messages comprising the conversation so far.
- `model: string`
  - Model ID used to generate the response, like `gpt-oss-120b-f16`.
- `max_tokens: (optional) integer`
  - The maximum number of [tokens](https://platform.openai.com/tokenizer) that can be generated in the chat completion. This value can be used to control [costs](https://openai.com/api/pricing/) for text generated via API.
- `seed: (optional) integer`
  - If specified, our system will run the inference deterministically, such that repeated requests with the same `seed` and parameters should return the same result.
- `stream: (optional) bool`
  - If set to true, the model response data will be streamed to the client as it is generated, using Server-Sent Events (SSE).
- `temperature: (optional) number`
  - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- `top_p: (optional) number`
  - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
- `logprobs: (optional) bool`
  - Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token returned in the `content` of `message`.
- `frequency_penalty: (optional) number`
  - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
- `presence_penalty: (optional) number`
  - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- `tools: array`
  - A list of tools ([function tools](https://platform.openai.com/docs/guides/function-calling)) the model may call.
- `tool_choice: (optional) string`
  - `"auto"`, `"required"`, or `"none"`.
  - Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
  - `none` is the default when no tools are present. `auto` is the default if tools are present.
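Putting a few of these parameters together, a request body that pins the seed and forces a particular tool call might look like the following sketch (the parameter values are illustrative):

```python
import json

# Request body combining a fixed seed with a forced tool call, per the
# tool_choice rules described above. Values are illustrative.
body = {
    "model": "gpt-oss-120b-f16",
    "seed": 42,          # fixed seed -> bit-by-bit repeatable output
    "temperature": 0.2,  # low temperature for focused sampling
    "messages": [
        {"role": "user", "content": "What is the weather like in Boston today?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
    # Forces the model to call get_current_weather rather than reply in text.
    "tool_choice": {"type": "function", "function": {"name": "get_current_weather"}},
}
print(json.dumps(body)[:40])
```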

docs/products/eigencompute/eigencompute-overview.md

Lines changed: 3 additions & 0 deletions

```diff
@@ -4,7 +4,10 @@ sidebar_position: 4
 ---

 :::tip Get Started
+
 Follow the [EigenX CLI documentation](https://github.com/Layr-Labs/eigenx-cli/blob/main/README.md) to deploy your application to EigenCompute.
+
+While in Alpha, **an allowlisted account is required to create apps.** Use an existing address with `eigenx auth login` or generate a new address with `eigenx auth generate`, then submit an onboarding request [here](https://onboarding.eigencloud.xyz/?utm_source=docs&utm_content=eigencompute_overview).
 :::

 ## Overview
```
