
Commit affe782

Make links work again inside tips
1 parent 84ed8e5 commit affe782
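
In Hugo, which builds these docs, the two shortcode delimiters differ in how the enclosed body is handled: the `{{< ... >}}` form splices the shortcode's inner content into the page verbatim, so markdown links inside a tip render as literal text, while the `{{% ... %}}` form passes the inner content through the markdown renderer first. That is why this commit swaps the delimiters on every `tip` callout and pads each body with blank lines, presumably so the renderer treats the body as block-level markdown. For illustration, a `tip` shortcode template might look roughly like this (a hypothetical sketch; the repo's actual template may differ):

```
{{/* layouts/shortcodes/tip.html: hypothetical sketch, not the repo's actual template. */}}
{{/* .Get 0 is the tip's title (e.g. "Note"); .Inner is everything between the opening and closing tags. */}}
<div class="tip">
  <strong>{{ .Get 0 }}</strong>
  {{ .Inner }}
</div>
```

With the angle-bracket form, `.Inner` reaches the page untouched; with the percent form, Hugo's content renderer (Goldmark) processes `.Inner` as markdown, turning `[label](url)` into a working link.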

24 files changed: +192 -96 lines changed


docs/commands/super-db.md

Lines changed: 28 additions & 14 deletions
@@ -15,7 +15,8 @@ title: super db
 
 <p id="status"></p>
 
-{{< tip "Status" >}}
+{{% tip "Status" %}}
+
 While [`super`](super.md) and its accompanying [formats](../formats/_index.md)
 are production quality, the SuperDB data lake is still fairly early in development
 and alpha quality.
@@ -25,7 +26,8 @@ is deployed to manage the lake's data layout via the
 [lake API](../lake/api.md).
 
 Enhanced scalability with self-tuning configuration is under development.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ## The Lake Model
 
@@ -153,7 +155,8 @@ running any `super db` lake command all pointing at the same storage endpoint
 and the lake's data footprint will always remain consistent as the endpoints
 all adhere to the consistency semantics of the lake.
 
-{{< tip "Caveat" >}}
+{{% tip "Caveat" %}}
+
 Data consistency is not fully implemented yet for
 the S3 endpoint so only single-node access to S3 is available right now,
 though support for multi-node access is forthcoming.
@@ -164,7 +167,8 @@ access to a local file system has been thoroughly tested and should be
 deemed reliable, i.e., you can run a direct-access instance of `super db` alongside
 a server instance of `super db` on the same file system and data consistency will
 be maintained.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### Locating the Lake
 
@@ -206,11 +210,13 @@ Each commit object is assigned a global ID.
 Similar to Git, commit objects are arranged into a tree and
 represent the entire commit history of the lake.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Technically speaking, Git can merge from multiple parents and thus
 Git commits form a directed acyclic graph instead of a tree;
 SuperDB does not currently support multiple parents in the commit object history.
-{{< /tip >}}
+
+{{% /tip %}}
 
 A branch is simply a named pointer to a commit object in the lake
 and like a pool, a branch name can be any valid UTF-8 string.
@@ -272,10 +278,12 @@ key. For example, on a pool with pool key `ts`, the query `ts == 100`
 will be optimized to scan only the data objects where the value `100` could be
 present.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 The pool key will also serve as the primary key for the forthcoming
 CRUD semantics.
-{{< /tip >}}
+
+{{% /tip %}}
 
 A pool also has a configured sort order, either ascending or descending
 and data is organized in the pool in accordance with this order.
@@ -325,9 +333,11 @@ using that pool's "branches log" in a similar fashion, then its corresponding
 commit object can be used to construct the data of that branch at that
 past point in time.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Time travel using timestamps is a forthcoming feature.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ## `super db` Commands
 
@@ -407,11 +417,13 @@ the [special value `this`](../language/pipeline-model.md#the-special-value-this)
 
 A newly created pool is initialized with a branch called `main`.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Lakes can be used without thinking about branches. When referencing a pool without
 a branch, the tooling presumes the "main" branch as the default, and everything
 can be done on main without having to think about branching.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### Delete
 ```
@@ -582,9 +594,11 @@ that is stored in the commit journal for reference. These values may
 be specified as options to the [`load`](#load) command, and are also available in the
 [lake API](../lake/api.md) for automation.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 The branchlog meta-query source is not yet implemented.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### Ls
 ```
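
Every hunk in this commit applies the same recipe, so after the change each tip in the docs follows a single consistent pattern, e.g., from the first hunk above (body elided):

```
{{% tip "Status" %}}

While [`super`](super.md) and its accompanying [formats](../formats/_index.md)
are production quality, ...

{{% /tip %}}
```

The blank lines inside the pair keep the body a standalone markdown block so the links within it render.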

docs/commands/super.md

Lines changed: 4 additions & 2 deletions
@@ -187,13 +187,15 @@ not desirable because (1) the Super JSON parser is not particularly performant a
 (2) all JSON numbers are floating point but the Super JSON parser will parse as
 JSON any number that appears without a decimal point as an integer type.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 The reason `super` is not particularly performant for Super JSON is that the [Super Binary](../formats/bsup.md) or
 [Super Columnar](../formats/csup.md) formats are semantically equivalent to Super JSON but much more efficient and
 the design intent is that these efficient binary formats should be used in
 use cases where performance matters. Super JSON is typically used only when
 data needs to be human-readable in interactive settings or in automated tests.
-{{< /tip >}}
+
+{{% /tip %}}
 
 To this end, `super` uses a heuristic to select between Super JSON and plain JSON when the
 `-i` option is not specified. Specifically, plain JSON is selected when the first values

docs/formats/bsup.md

Lines changed: 8 additions & 4 deletions
@@ -130,7 +130,8 @@ size decompression buffers in advance of decoding.
 Values for the `format` byte are defined in the
 [Super Binary compression format specification](./compression.md).
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 This arrangement of frames separating types and values allows
 for efficient scanning and parallelization. In general, values depend
 on type definitions but as long as all of the types are known when
@@ -143,7 +144,8 @@ heuristics, e.g., knowing a filtering predicate can't be true based on a
 quick scan of the data perhaps using the Boyer-Moore algorithm to determine
 that a comparison with a string constant would not work for any
 value in the buffer.
-{{< /tip >}}
+
+{{% /tip %}}
 
 Whether the payload was originally uncompressed or was decompressed, it is
 then interpreted according to the `T` bits of the frame code as a
@@ -211,12 +213,14 @@ is further encoded as a "counted string", which is the `uvarint` encoding
 of the length of the string followed by that many bytes of UTF-8 encoded
 string data.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 As defined by [Super JSON](jsup.md), a field name can be any valid UTF-8 string much like JSON
 objects can be indexed with arbitrary string keys (via index operator)
 even if the field names available to the dot operator are restricted
 by language syntax for identifiers.
-{{< /tip >}}
+
+{{% /tip %}}
 
 The type ID follows the field name and is encoded as a `uvarint`.
 
docs/formats/csup.md

Lines changed: 36 additions & 18 deletions
@@ -64,12 +64,14 @@ then write the metadata into the reassembly section along with the trailer
 at the end. This allows a stream to be converted to a Super Columnar file
 in a single pass.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 That said, the layout is
 flexible enough that an implementation may optimize the data layout with
 additional passes or by writing the output to multiple files then
 merging them together (or even leaving the Super Columnar entity as separate files).
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### The Data Section
 
@@ -85,17 +87,20 @@ There is no information in the data section for how segments relate
 to one another or how they are reconstructed into columns. They are just
 blobs of Super Binary data.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Unlike Parquet, there is no explicit arrangement of the column chunks into
 row groups but rather they are allowed to grow at different rates so a
 high-volume column might be comprised of many segments while a low-volume
 column must just be one or several. This allows scans of low-volume record types
 (the "mice") to perform well amongst high-volume record types (the "elephants"),
 i.e., there are not a bunch of seeks with tiny reads of mice data interspersed
 throughout the elephants.
-{{< /tip >}}
 
-{{< tip "TBD" >}}
+{{% /tip %}}
+
+{{% tip "TBD" %}}
+
 The mice/elephants model creates an interesting and challenging layout
 problem. If you let the row indexes get too far apart (call this "skew"), then
 you have to buffer very large amounts of data to keep the column data aligned.
@@ -109,15 +114,17 @@ if you use lots of buffering on ingest, you can write the mice in front of the
 elephants so the read path requires less buffering to align columns. Or you can
 do two passes where you store segments in separate files then merge them at close
 according to an optimization plan.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### The Reassembly Section
 
 The reassembly section provides the information needed to reconstruct
 column streams from segments, and in turn, to reconstruct the original values
 from column streams, i.e., to map columns back to composite values.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Of course, the reassembly section also provides the ability to extract just subsets of columns
 to be read and searched efficiently without ever needing to reconstruct
 the original rows. How well this performs is up to any particular
@@ -127,7 +134,8 @@ Also, the reassembly section is in general vastly smaller than the data section
 so the goal here isn't to express information in cute and obscure compact forms
 but rather to represent data in an easy-to-digest, programmer-friendly form that
 leverages Super Binary.
-{{< /tip >}}
+
+{{% /tip %}}
 
 The reassembly section is a Super Binary stream. Unlike Parquet,
 which uses an externally described schema
@@ -147,9 +155,11 @@ A super type's integer position in this sequence defines its identifier
 encoded in the [super column](#the-super-column). This identifier is called
 the super ID.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Change the first N values to type values instead of nulls?
-{{< /tip >}}
+
+{{% /tip %}}
 
 The next N+1 records contain reassembly information for each of the N super types
 where each record defines the column streams needed to reconstruct the original
@@ -171,11 +181,13 @@ type signature:
 In the rest of this document, we will refer to this type as `<segmap>` for
 shorthand and refer to the concept as a "segmap".
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 We use the type name "segmap" to emphasize that this information represents
 a set of byte ranges where data is stored and must be read from *rather than*
 the data itself.
-{{< /tip >}}
+
+{{% /tip %}}
 
 #### The Super Column
 
@@ -216,11 +228,13 @@ This simple top-down arrangement, along with the definition of the other
 column structures below, is all that is needed to reconstruct all of the
 original data.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 Each row reassembly record has its own layout of columnar
 values and there is no attempt made to store like-typed columns from different
 schemas in the same physical column.
-{{< /tip >}}
+
+{{% /tip %}}
 
 The notation `<any_column>` refers to any instance of the five column types:
 * [`<record_column>`](#record-column),
@@ -296,9 +310,11 @@ in the same column order implied by the union type, and
 * `tags` is a column of `int32` values where each subsequent value encodes
 the tag of the union type indicating which column the value falls within.
 
-{{< tip "TBD" >}}
+{{% tip "TBD" %}}
+
 Change code to conform to columns array instead of record{c0,c1,...}
-{{< /tip >}}
+
+{{% /tip %}}
 
 The number of times each value of `tags` appears must equal the number of values
 in each respective column.
@@ -350,14 +366,16 @@ data in the file,
 it will typically fit comfortably in memory and it can be very fast to scan the
 entire reassembly structure for any purpose.
 
-{{< tip "Example" >}}
+{{% tip "Example" %}}
+
 For a given query, a "scan planner" could traverse all the
 reassembly records to figure out which segments will be needed, then construct
 an intelligent plan for reading the needed segments and attempt to read them
 in mostly sequential order, which could serve as
 an optimizing intermediary between any underlying storage API and the
 Super Columnar decoding logic.
-{{< /tip >}}
+
+{{% /tip %}}
 
 To decode the "next" row, its schema index is read from the root reassembly
 column stream.

docs/install.md

Lines changed: 4 additions & 2 deletions
@@ -40,11 +40,13 @@ This installs the `super` binary in your `$GOPATH/bin`.
 
 Once installed, run a [quick test](#quick-tests).
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 If you don't have Go installed, download and install it from the
 [Go install page](https://golang.org/doc/install). Go 1.23 or later is
 required.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ## Quick Tests
 
docs/integrations/amazon-s3.md

Lines changed: 4 additions & 2 deletions
@@ -16,11 +16,13 @@ You must specify an AWS region via one of the following:
 You can create `~/.aws/config` by installing the
 [AWS CLI](https://aws.amazon.com/cli/) and running `aws configure`.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 If using S3-compatible storage that does not recognize the concept of regions,
 a region must still be specified, e.g., by providing a dummy value for
 `AWS_REGION`.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ## Credentials
 
docs/integrations/fluentd.md

Lines changed: 8 additions & 4 deletions
@@ -81,13 +81,15 @@ The default settings when running `zed create` set the
 field and sort the stored data in descending order by that key. This
 configuration is ideal for Zeek log data.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 The [Zui](https://zui.brimdata.io/) desktop application automatically starts a
 Zed lake service when it launches. Therefore if you are using Zui you can
 skip the first set of commands shown above. The pool can be created from Zui
 by clicking **+**, selecting **New Pool**, then entering `ts` for the
 [pool key](../commands/super-db.md#pool-key).
-{{< /tip >}}
+
+{{% /tip %}}
 
 ### Fluentd
 
@@ -366,15 +368,17 @@ leverage, you can reduce the lake's storage footprint by periodically running
 storage that contain the granular commits that have already been rolled into
 larger objects by compaction.
 
-{{< tip "Note" >}}
+{{% tip "Note" %}}
+
 As described in issue [super/4934](https://github.com/brimdata/super/issues/4934),
 even after running `zed vacuum`, some files related to commit history are
 currently still left behind below the lake storage path. The issue describes
 manual steps that can be taken to remove these files safely, if desired.
 However, if you find yourself needing to take these steps in your environment,
 please [contact us](#contact-us) as it will allow us to boost the priority
 of addressing the issue.
-{{< /tip >}}
+
+{{% /tip %}}
 
 ## Ideas For Enhancement
 