Commit 9d9a4fb

Simplify generated code module structure (#138)
Remove the `generated` directory and the various re-exports:

- namespace docs have been extracted from the respective `mod.rs` to `.md` files in `api_generator/docs`, and are inserted in the generated source file
- namespace sources are generated directly in the `src` directory
- the generator can now insert generated sections in regular source files based on `GENERATE-BEGIN` and `GENERATE-END` markers. This is used to generate the namespace module list in the main `lib.rs` and to merge generated with manual code in `params.rs`
- generated and merged source files are listed in `src/.generated.toml` so that the generator can clean up on start

Before settling on merging generated code with special markers, we experimented with the `include!()` macro, but this didn't work well:

- rustfmt will not reformat included files
- `mod` statements look for files relative to the source file, so included files have to be located in the same directory as their includer, whereas locating them in a dedicated directory would have provided a nicer organization

Merging generated sections provides a more natural source code layout.
1 parent ecbfb80 commit 9d9a4fb

86 files changed: +9299 −9225 lines changed

api_generator/Cargo.toml

Lines changed: 6 additions & 1 deletion
```diff
@@ -16,6 +16,7 @@ globset = "~0.4"
 Inflector = "0.11.4"
 indicatif = "0.12.0"
 lazy_static = "1.4.0"
+path-slash = "0.1.3"
 quote = "~0.3"
 reduce = "0.1.2"
 regex = "1.3.1"
@@ -26,5 +27,9 @@ serde_json = "~1"
 serde_derive = "~1"
 syn = { version = "~0.11", features = ["full"] }
 tar = "~0.4"
+toml = "0.5.6"
 url = "2.1.1"
-void = "1.0.2"
+void = "1.0.2"
+
+[dev-dependencies]
+tempfile = "3.1.0"
```
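The new `toml` dependency presumably backs the `src/.generated.toml` tracking file mentioned in the commit message. A hypothetical sketch of what such a manifest could contain (the key names are assumptions, not the actual format):

```toml
# Hypothetical shape of src/.generated.toml; key names are assumptions.
# Fully generated files: safe to delete outright before regeneration.
written = ["cat.rs", "ccr.rs", "cluster.rs"]
# Files containing GENERATE-BEGIN/GENERATE-END sections: only the marked
# region is replaced, manual code around it is preserved.
merged = ["lib.rs", "params.rs"]
```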
3 files renamed without changes.
Lines changed: 5 additions & 0 deletions

Async Search APIs

[Async search APIs](https://www.elastic.co/guide/en/elasticsearch/reference/master/async-search.html)
let you asynchronously execute a search request, monitor its progress, and retrieve
partial results as they become available.
api_generator/docs/namespaces/cat.md

Lines changed: 94 additions & 0 deletions

Cat APIs

The [Cat APIs](https://www.elastic.co/guide/en/elasticsearch/reference/master/cat.html) aim to
meet the needs of humans when looking at data returned from Elasticsearch,
formatting it as compact, column aligned text, making it easier on human eyes.

# Plain text responses

By default, all Cat APIs are configured to send requests with `text/plain` content-type
and accept headers, returning plain text responses

```rust,no_run
# use elasticsearch::{Elasticsearch, Error, SearchParts};
# use url::Url;
# use elasticsearch::auth::Credentials;
# use serde_json::{json, Value};
# async fn doc() -> Result<(), Box<dyn std::error::Error>> {
# let client = Elasticsearch::default();
let response = client
    .cat()
    .nodes()
    .send()
    .await?;

let response_body = response.text().await?;
# Ok(())
# }
```

# JSON responses

JSON responses can be returned from Cat APIs either by using `.format("json")`

```rust,no_run
# use elasticsearch::{Elasticsearch, Error, SearchParts};
# use url::Url;
# use elasticsearch::auth::Credentials;
# use serde_json::{json, Value};
# async fn doc() -> Result<(), Box<dyn std::error::Error>> {
# let client = Elasticsearch::default();
let response = client
    .cat()
    .nodes()
    .format("json")
    .send()
    .await?;

let response_body = response.json::<Value>().await?;
# Ok(())
# }
```

or by setting an accept header using `.header()`

```rust,no_run
# use elasticsearch::{Elasticsearch, Error, SearchParts, http::headers::{HeaderValue, DEFAULT_ACCEPT, ACCEPT}};
# use url::Url;
# use serde_json::{json, Value};
# async fn doc() -> Result<(), Box<dyn std::error::Error>> {
# let client = Elasticsearch::default();
let response = client
    .cat()
    .nodes()
    .header(ACCEPT, HeaderValue::from_static(DEFAULT_ACCEPT))
    .send()
    .await?;

let response_body = response.json::<Value>().await?;
# Ok(())
# }
```

# Column Headers

The column headers to return can be controlled with `.h()`

```rust,no_run
# use elasticsearch::{Elasticsearch, Error, SearchParts};
# use url::Url;
# use serde_json::{json, Value};
# async fn doc() -> Result<(), Box<dyn std::error::Error>> {
# let client = Elasticsearch::default();
let response = client
    .cat()
    .nodes()
    .h(&["ip", "port", "heapPercent", "name"])
    .send()
    .await?;

let response_body = response.text().await?;
# Ok(())
# }
```

api_generator/docs/namespaces/ccr.md

Lines changed: 7 additions & 0 deletions

Cross-cluster Replication APIs

[Enable replication of indices in remote clusters to a local cluster](https://www.elastic.co/guide/en/elasticsearch/reference/master/xpack-ccr.html).
This functionality can be used in some common production use cases:

- Disaster recovery in case a primary cluster fails. A secondary cluster can serve as a hot backup
- Geo-proximity so that reads can be served locally
Lines changed: 4 additions & 0 deletions

Cluster APIs

[Manage settings](https://www.elastic.co/guide/en/elasticsearch/reference/master/cluster.html),
perform operations, and retrieve information about an Elasticsearch cluster.
Lines changed: 8 additions & 0 deletions

Dangling Index APIs

If Elasticsearch encounters index data that is absent from the current cluster state,
those indices are considered to be _dangling_. For example, this can happen if you delete
more than `cluster.indices.tombstones.size` indices while an Elasticsearch node
is offline.

The dangling indices APIs can list, import and delete dangling indices.
Lines changed: 6 additions & 0 deletions

Enrich APIs

Manage [enrich policies](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest-enriching-data.html#enrich-policy)
that can be used by the [enrich processor](https://www.elastic.co/guide/en/elasticsearch/reference/master/enrich-processor.html)
as part of an [ingest pipeline](../ingest/index.html), to add data from your existing indices
to incoming documents during ingest.
