Commit 22cca61

Merge branch 'main' of github.com:eth-sri/lmql

2 parents 554a382 + 51c92b0
File tree

11 files changed: +24 -24 lines

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ project.
 
 **Running Test Suites** The repository contains a number of test suites in the `src/lmql/tests/` directory. To run all
 tests simply run `python src/lmql/tests/all.py`. Note that for some tests you need to configure an
-OpenAI API key according to the instructions in [documentation](https://docs.lmql.ai/en/stable/language/openai.html).
+OpenAI API key according to the instructions in [documentation](https://lmql.ai/docs/models/openai.html).
 We are working to remove the external dependency on the OpenAI API, but for now it is still required
 for some tests. If you cannot get an API key, you can ask one of the core maintainers to run the
 tests for your, once your pull request is ready.

README.md

Lines changed: 11 additions & 11 deletions

@@ -8,7 +8,7 @@
 <p align="center">
 A programming language for large language models.
 <br />
-<a href="https://docs.lmql.ai"><strong>Documentation »</strong></a>
+<a href="https://lmql.ai/docs"><strong>Documentation »</strong></a>
 <br />
 <br />
 <a href="https://lmql.ai">Explore Examples</a>
@@ -52,25 +52,25 @@ Program Output:
 LMQL allows you to express programs that contain both, traditional algorithmic logic, and LLM calls.
 At any point during execution, you can prompt an LLM on program variables in combination with standard natural language prompting, to leverage model reasoning capabilities in the context of your program.
 
-To better control LLM behavior, you can use the `where` keyword to specify constraints and data types of the generated text. This enables guidance of the model's reasoning process, and constraining of intermediate outputs using an [expressive constraint language](https://docs.lmql.ai/en/stable/language/constraints.html).
+To better control LLM behavior, you can use the `where` keyword to specify constraints and data types of the generated text. This enables guidance of the model's reasoning process, and constraining of intermediate outputs using an [expressive constraint language](https://lmql.ai/docs/language/constraints.html).
 
-Beyond this linear form of scripting, LMQL also supports a number of decoding algorithms to execute your program, such as `argmax`, `sample` or even advanced branching decoders like [beam search and `best_k`](https://docs.lmql.ai/en/stable/language/decoders.html).
+Beyond this linear form of scripting, LMQL also supports a number of decoding algorithms to execute your program, such as `argmax`, `sample` or even advanced branching decoders like [beam search and `best_k`](https://lmql.ai/docs/language/decoding.html).
 
-Learn more about LMQL by exploring thne **[Example Showcase](https://lmql.ai)**, by running your own programs in our **[browser-based Playground IDE](https://lmql.ai/playground)** or by reading the **[documentation](https://docs.lmql.ai)**.
+Learn more about LMQL by exploring thne **[Example Showcase](https://lmql.ai)**, by running your own programs in our **[browser-based Playground IDE](https://lmql.ai/playground)** or by reading the **[documentation](https://lmql.ai/docs)**.
 
 ## Feature Overview
 
 LMQL is designed to make working with language models like OpenAI and 🤗 Transformers more efficient and powerful through its advanced functionality, including multi-variable templates, conditional distributions, constraints, datatypes and control flow.
 
-- [X] **Python Syntax**: Write your queries using [familiar Python syntax](https://docs.lmql.ai/en/stable/language/overview.html), fully integrated with your Python environment (classes, variable captures, etc.)
-- [X] **Rich Control-Flow**: LMQL offers full Python support, enabling powerful [control flow and logic](https://docs.lmql.ai/en/stable/language/scripted_prompts.html) in your prompting logic.
-- [X] **Advanced Decoding**: Take advantage of advanced decoding techniques like [beam search, best_k, and more](https://docs.lmql.ai/en/stable/language/decoders.html).
-- [X] **Powerful Constraints Via Logit Masking**: Apply [constraints to model output](https://docs.lmql.ai/en/stable/language/constraints.html), e.g. to specify token length, character-level constraints, datatype and stopping phrases to get more control of model behavior.
+- [X] **Python Syntax**: Write your queries using [familiar Python syntax](https://lmql.ai/docs/language/overview.html), fully integrated with your Python environment (classes, variable captures, etc.)
+- [X] **Rich Control-Flow**: LMQL offers full Python support, enabling powerful [control flow and logic](https://lmql.ai/docs/language/scripted-prompting.html) in your prompting logic.
+- [X] **Advanced Decoding**: Take advantage of advanced decoding techniques like [beam search, best_k, and more](https://lmql.ai/docs/language/decoding.html).
+- [X] **Powerful Constraints Via Logit Masking**: Apply [constraints to model output](https://lmql.ai/docs/language/constraints.html), e.g. to specify token length, character-level constraints, datatype and stopping phrases to get more control of model behavior.
 - [X] **Optimizing Runtime:** LMQL leverages speculative execution to enable faster inference, constraint short-circuiting, more efficient token use and [tree-based caching](https://lmql.ai/blog/release-0.0.6.html).
-- [X] **Sync and Async API**: Execute hundreds of queries in parallel with LMQL's [asynchronous API](https://docs.lmql.ai/en/stable/python/python.html), which enables cross-query batching.
-- [X] **Multi-Model Support**: Seamlessly use LMQL with [OpenAI API, Azure OpenAI, and 🤗 Transformers models](https://docs.lmql.ai/en/stable/language/models.html).
+- [X] **Sync and Async API**: Execute hundreds of queries in parallel with LMQL's [asynchronous API](https://lmql.ai/docs/lib/python.html), which enables cross-query batching.
+- [X] **Multi-Model Support**: Seamlessly use LMQL with [OpenAI API, Azure OpenAI, and 🤗 Transformers models](https://lmql.ai/docs/models/).
 - [X] **Extensive Applications**: Use LMQL to implement advanced applications like [schema-safe JSON decoding](https://github.com/microsoft/guidance#guaranteeing-valid-syntax-json-example-notebook), [algorithmic prompting](https://twitter.com/lbeurerkellner/status/1648076868807950337), [interactive chat interfaces](https://twitter.com/lmqllang/status/1645776209702182917), and [inline tool use](https://lmql.ai/#kv).
-- [X] **Library Integration**: Easily employ LMQL in your existing stack leveraging [LangChain](https://docs.lmql.ai/en/stable/python/langchain.html) or [LlamaIndex](https://docs.lmql.ai/en/stable/python/llama_index.html).
+- [X] **Library Integration**: Easily employ LMQL in your existing stack leveraging [LangChain](https://lmql.ai/docs/lib/integrations/langchain.html) or [LlamaIndex](https://lmql.ai/docs/lib/integrations/llama_index.html).
 - [X] **Flexible Tooling**: Enjoy an interactive development experience with [LMQL's Interactive Playground IDE](https://lmql.ai/playground), and [Visual Studio Code Extension](https://marketplace.visualstudio.com/items?itemName=lmql-team.lmql).
 - [X] **Output Streaming**: Stream model output easily via [WebSocket, REST endpoint, or Server-Sent Event streaming](https://github.com/eth-sri/lmql/blob/main/src/lmql/output/).
 

docs/.vitepress/link-checker.js

Lines changed: 2 additions & 2 deletions

@@ -77,8 +77,8 @@ markdownFiles.forEach(file => {
 // ignore internal links
 return
 }
-if (url.startsWith("https://docs.lmql.ai")) {
-console.log(`File ${file}:${lineno} contains direct link to docs.lmql.ai: ${url}`)
+if (url.startsWith("https://lmql.ai/docs")) {
+console.log(`File ${file}:${lineno} contains direct link to lmql.ai/docs: ${url}`)
 return
 }
 
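The link-checker hunk above flags Markdown files that hard-code the docs domain. As a rough illustration of that check, here is a minimal, hypothetical Python sketch (the function name, regex, and sample input are illustrative and not taken from the repository):

```python
import re

# Matches Markdown links of the form [text](https://example.com/page)
LINK_RE = re.compile(r"\[[^\]]*\]\((https?://[^)\s]+)\)")

def find_absolute_docs_links(text, prefix="https://lmql.ai/docs"):
    """Return (lineno, url) pairs for links that hard-code the docs domain."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in LINK_RE.finditer(line):
            url = match.group(1)
            if url.startswith(prefix):
                hits.append((lineno, url))
    return hits

sample = "See the [docs](https://lmql.ai/docs/language/constraints.html) and [site](https://lmql.ai)."
print(find_absolute_docs_links(sample))
```

Like the real checker, this only reports absolute links under the docs prefix, so in-repo relative links and other external URLs pass through untouched.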

docs/blog/posts/release-0.0.6.4.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ Among many things, this update contains several bug fixes and improvements. The
 To learn more about the internals of the new streaming protocol, i.e. the language model transport protocol (LMTP), you can find more details in [this README file](https://github.com/eth-sri/lmql/blob/main/src/lmql/models/lmtp/README.md). In the future, we intend to implement more model backends using LMTP, streamlining communication between LMQL and models.
 
 <div style="text-align:center">
-<img src="https://docs.lmql.ai/en/stable/_images/inference.svg" width="80%" />
+<img src="https://lmql.ai/assets/inference.f47a6f3e.svg" width="80%" />
 <br>
 <i>LMQL's new streaming protocol (LMTP) allows for faster local model inference.</i>
 </div>

scripts/flake.d/pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ authors = [
 "Martin Vechev",
 ]
 homepage = "https://lmql.ai"
-documentation = "https://docs.lmql.ai"
+documentation = "https://lmql.ai/docs"
 maintainers = ["The LMQL Team <hello@lmql.ai>"]
 license = "Apache-2.0"
 readme = "README.md"

setup.cfg

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ long_description = file: README.md
 long_description_content_type = text/markdown
 url = https://lmql.ai
 project_urls =
-    Docs = https://docs.lmql.ai
+    Docs = https://lmql.ai/docs
 classifiers =
     Programming Language :: Python :: 3
     Operating System :: OS Independent

src/lmql/models/lmtp/README.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ It relies on the idea of separating model loading and inference into a separate
 
 ![Architecture](../../../../docs/source/images/inference.svg)
 
-Read more about using LMTP in LMQL, in the [LMQL documentation](https://docs.lmql.ai/en/latest/language/hf.html).
+Read more about using LMTP in LMQL, in the [LMQL documentation](https://lmql.ai/docs/models/hf.html).
 
 ## Communication Channels
 
src/lmql/runtime/openai_secret.py

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@
 openai-org: <your openai org>
 
 For more info, check the related project docs:
-https://docs.lmql.ai/en/stable/language/openai.html#configuring-openai-api-credentials
+https://lmql.ai/docs/models/openai.html#configuring-openai-api-credentials
 """
 
 
src/lmql/ui/playground/src/App.jsx

Lines changed: 3 additions & 3 deletions

@@ -616,7 +616,7 @@ function ModelSelection(props) {
 <span className="instructions">
 <b>Custom Model</b><br/>
 Specify the model to execute your query with. You can also type in the text field above. <i>This setting will override any model specified by the query.</i>
-{configuration.BROWSER_MODE ? <><br/><a href={"https://docs.lmql.ai/en/latest/quickstart.html"} target="_blank" rel="noreferrer" className="hidden-on-small">
+{configuration.BROWSER_MODE ? <><br/><a href={"https://lmql.ai/docs/"} target="_blank" rel="noreferrer" className="hidden-on-small">
 Install LMQL locally </a> to use other models, e.g. from 🤗 Tranformers</>
 : null}
 </span>
@@ -2864,7 +2864,7 @@ class App extends React.Component {
 {!configuration.NEXT_MODE && <>Explore LMQL</>}
 {configuration.NEXT_MODE && <>Explore New Features</>}
 </FancyButton>}
-{window.location.hostname.includes("lmql.ai") && <a href={"https://docs.lmql.ai/en/latest/quickstart.html"} target="_blank" rel="noreferrer" className="hidden-on-small">
+{window.location.hostname.includes("lmql.ai") && <a href={"https://lmql.ai/docs/"} target="_blank" rel="noreferrer" className="hidden-on-small">
 Install LMQL Locally </a>}
 <Spacer />
 {/* show tooltip with build time */}
@@ -2892,7 +2892,7 @@ class App extends React.Component {
 <a href="https://github.com/eth-sri/lmql" disabled target="_blank" rel="noreferrer"><BsGithub/>LMQL on Github</a>
 </li>
 <li>
-<a href="https://docs.lmql.ai" disabled target="_blank" rel="noreferrer"><BsBook/>Documentation</a>
+<a href="https://lmql.ai/docs" disabled target="_blank" rel="noreferrer"><BsBook/>Documentation</a>
 </li>
 <span>
 LMQL {this.state.buildInfo.commit}

src/lmql/ui/playground/src/Explore.jsx

Lines changed: 1 addition & 1 deletion

@@ -489,7 +489,7 @@ export function Explore() {
 let description = <>
 LMQL is a query language for large language models. Explore the examples below to get started.
 <LinkBox>
-<a className="button" target="_blank" rel="noreferrer" href="https://docs.lmql.ai">Documentation</a>
+<a className="button" target="_blank" rel="noreferrer" href="https://lmql.ai/docs">Documentation</a>
 <a className="button" target="_blank" rel="noreferrer" href="https://lmql.ai">Overview</a>
 </LinkBox>
 </>
