Merged
1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

5 changes: 5 additions & 0 deletions saffron/Cargo.toml
@@ -46,10 +46,15 @@ tracing-subscriber = { version = "0.3", features = [

[dev-dependencies]
ark-std.workspace = true
criterion = { workspace = true, features = ["html_reports"] }
ctor = "0.2"
proptest.workspace = true
once_cell.workspace = true

[[bin]]
name = "saffron-og-flow"
path = "og-flow/main.rs"

[[bench]]
name = "read_proof_bench"
harness = false
117 changes: 117 additions & 0 deletions saffron/benches/read_proof_bench.rs
@@ -0,0 +1,117 @@
//! Run this bench using `cargo criterion -p saffron --bench read_proof_bench`
Contributor
$ cargo criterion -p saffron --bench read_proof_bench
error: no such command: `criterion`

        View all installed commands with `cargo --list`
        Find a package to install `criterion` with `cargo search cargo-criterion`

Is there something I am missing to run this command?

Member Author

cargo install cargo-criterion


use ark_ff::{One, UniformRand, Zero};
use ark_poly::{univariate::DensePolynomial, Evaluations};
use criterion::{black_box, criterion_group, criterion_main, BatchSize, Criterion};
use kimchi::{circuits::domains::EvaluationDomains, groupmap::GroupMap};
use mina_curves::pasta::{Fp, Vesta};
use once_cell::sync::Lazy;
use poly_commitment::{commitment::CommitmentCurve, ipa::SRS, SRS as _};
use rand::rngs::OsRng;
use saffron::{
env,
read_proof::{prove, verify},
ScalarField, SRS_SIZE,
};

// Set up static resources to avoid re-computation during benchmarks
static SRS: Lazy<SRS<Vesta>> = Lazy::new(|| {
Contributor
Maybe using pub fn get_srs_test<G>() -> SRS<G> from kimchi/src/precomputed_srs.rs would be a more canonical way?

Member Author
Seems ok, but only because we have the same size. I was first thinking about making the change, but I am now wondering whether it is best, as Saffron and Kimchi are two different protocols. We might want to differentiate them?

Member Author
I opened #3208 but I am not totally convinced we should do it. Feel free to accept if you have a strong opinion.

if let Ok(srs) = std::env::var("SRS_FILEPATH") {
env::get_srs_from_cache(srs)
} else {
SRS::create(SRS_SIZE)
}
});
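
For reference, a minimal sketch of the reviewer's suggestion above, assuming the quoted helper is reachable as kimchi::precomputed_srs::get_srs_test and that Vesta satisfies its trait bounds (both assumptions; this is not the variant merged in this PR):

// Hypothetical alternative to the SRS static above: reuse kimchi's
// precomputed test SRS instead of creating one or loading it from a cache.
// Only reasonable here because that test SRS has the same size as SRS_SIZE.
static SRS: Lazy<SRS<Vesta>> =
    Lazy::new(|| kimchi::precomputed_srs::get_srs_test::<Vesta>());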

static DOMAIN: Lazy<EvaluationDomains<ScalarField>> =
Lazy::new(|| EvaluationDomains::<ScalarField>::create(SRS_SIZE).unwrap());

static GROUP_MAP: Lazy<<Vesta as CommitmentCurve>::Map> =
Lazy::new(<Vesta as CommitmentCurve>::Map::setup);

fn generate_test_data(
size: usize,
) -> (Vec<ScalarField>, Vec<ScalarField>, Vec<ScalarField>, Vesta) {
let mut rng = o1_utils::tests::make_test_rng(None);

// Generate data with specified size
let data: Vec<ScalarField> = (0..size).map(|_| Fp::rand(&mut rng)).collect();

// Create data commitment
let data_poly: DensePolynomial<ScalarField> =
Evaluations::from_vec_and_domain(data.clone(), DOMAIN.d1).interpolate();
let data_comm: Vesta = SRS.commit_non_hiding(&data_poly, 1).chunks[0];

// Generate query (about 10% of positions will be queried)
let query: Vec<ScalarField> = (0..size)
.map(|_| {
if rand::random::<f32>() < 0.1 {
Fp::one()
} else {
Fp::zero()
}
})
.collect();

// Compute answer as data * query
let answer: Vec<ScalarField> = data.iter().zip(query.iter()).map(|(d, q)| *d * q).collect();

(data, query, answer, data_comm)
}

fn bench_read_proof_prove(c: &mut Criterion) {
let (data, query, answer, data_comm) = generate_test_data(SRS_SIZE);

let description = format!("prove size {}", SRS_SIZE);
c.bench_function(description.as_str(), |b| {
b.iter_batched(
|| OsRng,
|mut rng| {
black_box(prove(
*DOMAIN,
&SRS,
&GROUP_MAP,
&mut rng,
data.as_slice(),
query.as_slice(),
answer.as_slice(),
&data_comm,
))
},
BatchSize::NumIterations(10),
)
});
}

fn bench_read_proof_verify(c: &mut Criterion) {
let (data, query, answer, data_comm) = generate_test_data(SRS_SIZE);
Contributor
A word of caution: the default kimchi prover/verifier bench is quite inaccurate, so you might need to set up bench parameters manually if you want to achieve meaningfully low noise.

Member Author
I do not understand why you are mentioning the kimchi prover/verifier here. Can you explain?

Contributor
I suspect it will largely generalise; I only mentioned it as a particular example. I guess on average it's hard to measure verification precisely (because of multi-threading? I'm not sure why, actually).


// Create proof first
let mut rng = OsRng;
let proof = prove(
*DOMAIN,
&SRS,
&GROUP_MAP,
&mut rng,
data.as_slice(),
query.as_slice(),
answer.as_slice(),
&data_comm,
);

let description = format!("verify size {}", SRS_SIZE);
c.bench_function(description.as_str(), |b| {
b.iter_batched(
|| OsRng,
|mut rng| {
black_box(verify(
*DOMAIN, &SRS, &GROUP_MAP, &mut rng, &data_comm, &proof,
))
},
BatchSize::SmallInput,
)
});
}

criterion_group!(benches, bench_read_proof_prove, bench_read_proof_verify);
criterion_main!(benches);
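
As the reviewer cautions above, Criterion's defaults can be noisy for slow prover/verifier benches. A hedged sketch of how the benchmark group could pin the sampling parameters manually; the function name and the concrete values are illustrative, not part of this PR:

// Illustrative only: a possible replacement for the plain `criterion_group!`
// above that fixes Criterion's sampling parameters instead of relying on the
// defaults. The values are placeholders, not tuned measurements.
fn noise_tuned_criterion() -> Criterion {
    Criterion::default()
        .sample_size(20)                                       // fewer, longer samples
        .warm_up_time(std::time::Duration::from_secs(5))       // let threads/caches settle first
        .measurement_time(std::time::Duration::from_secs(60))  // more wall time per benchmark
}

criterion_group! {
    name = benches;
    config = noise_tuned_criterion();
    targets = bench_read_proof_prove, bench_read_proof_verify
}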