queries
insert into headers (hash, flags, height, mtp, parent_id) values (...);
update headers set candidate = true where id = ...;
or
With the libbitcoin database we call set_link and then push_candidate.
With SQL we can do an upsert, as below. This reduces the two database calls to a single call.
insert into headers (hash, flags, height, mtp, parent_id) values (...)
on conflict (hash) do update set candidate = true;
We have to save the inputs, the outputs, and the relationships used to look up prevouts. Therefore we wrap them in a database transaction so the writes are atomic and the lookups can scale as required.
BEGIN;
-- save the returned id in an application variable
insert into outputs (output_index, value, script) values (0, 100, 'dup verify') returning id;
-- save the returned id in an application variable
insert into inputs (input_index, sequence, script) values (0, 1, 'checksig') returning id;
-- use the ids returned above
insert into prevouts (input_id, output_id) values (...);
COMMIT;
We can use both ID and hash as the situation requires. We have an index on both.
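A sketch of the supporting indexes; the index names here are assumed to follow the pattern in sqlite-schema.sql (e.g. transactions_hash_ix), with id as the integer primary key:
-- unique indexes backing lookup by hash (index names assumed)
create unique index if not exists headers_hash_ix on headers (hash);
create unique index if not exists transactions_hash_ix on transactions (hash);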
In comparison to the mmap store, we don't need to store the confirmed and valid status in separate append-only logs. Here we update in place, counting on the SQL DBMS to manage the updates.
If this is a bottleneck, we can optimise it by moving the confirmed and valid flags to a separate headers_state table.
update headers set confirmed = true where id = ...;
update headers set confirmed = true where hash = ...;
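If the separate headers_state table is chosen instead, a minimal sketch (the table name and columns are assumed, not part of the current schema):
-- hypothetical headers_state table holding the mutable flags
create table if not exists headers_state (
  header_id integer primary key references headers (id),
  confirmed boolean not null default false,
  valid     boolean not null default false
);
update headers_state set confirmed = true where header_id = ...;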
- get_height
- is_candidate
- is_confirmed_block
- is_confirmed_tx, is_confirmed_input, is_confirmed_output
These are selects on id; we provide the same efficiency for lookup by hash.
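A minimal sketch of these lookups against the headers columns shown above; the other is_confirmed_* queries follow the same pattern on their own tables:
-- get_height / is_candidate by id ...
select height, candidate from headers where id = ...;
-- ... and the same lookup by hash
select height, candidate from headers where hash = ...;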
to_spenders is used by is_spent_output and is_spent_prevout, and can be queried by point or by primary key.
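When the output's primary key is already known, the spenders can be looked up directly through prevouts; a minimal sketch using the tables above:
-- look up spenders by the output's primary key
select i.*
from prevouts p
join inputs i on i.id = p.input_id
where p.output_id = ...;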
For a point, we first resolve the output via the transactions_outputs table. Using the schema from sqlite-schema.sql, we can query the spenders as:
select i.* from transactions tx, transactions_outputs txo, prevouts p, inputs i
where tx.hash = <point.hash> and
txo.transaction_id = tx.id and
txo.output_index = <point.index> and
p.output_id = txo.output_id and
i.id = p.input_id;
Expected analysis:
- Index scan on transactions_hash_ix. This is only a million entries, so it should require about 50MB of RAM.
- Index scan on prevouts_output_ix. This will be 2b (outputs) * avg number of outputs, say 4b rows.
- Index scan on inputs_pk.
- Join the results of the above index scans.
The problem here is the index scan on prevouts_output_ix, which will have ~4b entries. I have seen PostgreSQL handle a 2b-row hash index fitting under 50GB; this index will be close to 100GB, which definitely won't be performant on user laptops.
Input: is_mature, is_spent, is_strong, locked_input
This goes back to the get-spenders problem: we need to verify that the spenders are not spent.
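A sketch of checking whether an output already has a strong spender (the core of is_spent); the transactions_inputs link table (mirroring transactions_outputs) and the strong flag on transactions are assumptions for illustration:
-- does a strong (confirmed) transaction spend this output?
select exists (
  select 1
  from prevouts p
  join transactions_inputs txi on txi.input_id = p.input_id
  join transactions tx on tx.id = txi.transaction_id
  where p.output_id = ...
    and tx.strong = true
) as is_spent;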
block_confirmable
set_strong
set_unstrong
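set_strong and set_unstrong map naturally to in-place updates; a minimal sketch, assuming a boolean strong column on the transactions table:
-- set_strong / set_unstrong as in-place updates (strong column assumed)
update transactions set strong = true where id = ...;
update transactions set strong = false where id = ...;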