
Useless "Full Results" Link - Critical Context Missing for Decision Makers Evaluating SurrealDB #224

@snewell92

Description


From the blog post, any link that says

Click here to see the full results

takes me to the Actions run, and if I'm interested in, say, comparing Neo4j to Surreal on any bench, I see this

[screenshot of the Actions run]

Some of the charts in the blog post include Neo4j, others don't. You have editorialized this in a reasonable way, but I have a different context and different reasons for looking at SurrealDB, and most of the time I'm making sub-30-second decisions when reading posts like this, based on relevant data I can filter. You give me no insight beyond the three competitors you choose for a given bench, yet from what I gather you run all of them in all benches, is that right? How do I see that, without running it myself? (I would run it myself if it were a foregone conclusion that one of my teams were going to use SurrealDB, since benching on actual Azure hardware in the cloud would of course be part of our due diligence. Right now, though, my due diligence is eliminating options my teams don't have to worry about because they aren't relevant. c:)

This is disappointing, to say the least. I have a team at Mews currently using Neo4j, and I thought it might be interesting to prototype with SurrealDB, but I have no way of telling whether that is even worth my time.

So, this needs to change if I'm going to take SurrealDB seriously.

  • Provide an interactive build of the full suite of benchmarks
    • Use GitHub Pages to deploy a SPA/page with Chart.js, or whatever is easiest to build. I'm not fussed about specifics; I want to filter and compare my own use cases and tech stacks (a rough sketch of what I mean is below this list). The directional goal of transparency is not achieved by that blog post and its cherry-picked comparisons.
    • The more 'provenance' the better: machine specs, timestamps, versions of the other engines, config settings, and so on.
    • This is also an opportunity for y'all to provide long-term data on how much better you are getting over time 🔥
    • This may seem like a lot, but I just want you to write down what you've already had to do to get this benchmarking off the ground in the first place. You have to make all these decisions anyway; just let us know what they are.
  • Provide durable links that show raw data
  • In the build, going deeper than the blog post, ensure that not only a link back to the source code (this repo) is presented, but also an adequate description of the methodology, so stakeholders like me can understand the context of each benchmark.
    • This is partially done in the "How does it work?" section, but I'd appreciate more detail: is the harness C, Rust, Go, or JS/TS? What are the specs of the bench?
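
To be concrete about the interactive build and the 'provenance' asks above, here is a rough sketch of the kind of thing I mean, assuming Chart.js for rendering and a hypothetical per-run result shape; none of the field names below are the repo's actual schema, they are just illustrative:

```ts
import Chart from 'chart.js/auto';

// Hypothetical shape for one published benchmark run.
// Field names are illustrative, not the repo's actual artifact format.
interface BenchRun {
  benchmark: string;           // e.g. "bulk insert", "graph traversal"
  engine: string;              // "surrealdb", "neo4j", "postgres", ...
  engineVersion: string;
  throughputOpsPerSec: number;
  machine: { cpu: string; ramGb: number; os: string };
  timestamp: string;           // ISO 8601, when the run happened
  configUrl: string;           // durable link to the exact config used
}

// Render only the engines/benchmarks the visitor has selected,
// so each reader can make their own comparison.
function renderComparison(
  canvas: HTMLCanvasElement,
  runs: BenchRun[],
  engines: string[],
  benchmarks: string[],
): Chart {
  const selected = runs.filter(
    (r) => engines.includes(r.engine) && benchmarks.includes(r.benchmark),
  );
  const labels = [...new Set(selected.map((r) => r.benchmark))];
  const datasets = engines.map((engine) => ({
    label: engine,
    data: labels.map(
      (b) =>
        selected.find((r) => r.engine === engine && r.benchmark === b)
          ?.throughputOpsPerSec ?? 0,
    ),
  }));
  return new Chart(canvas, {
    type: 'bar',
    data: { labels, datasets },
    options: {
      plugins: {
        title: { display: true, text: 'Throughput (ops/sec, higher is better)' },
      },
    },
  });
}
```

Even something that small, fed by a JSON artifact from each CI run and deployed to GitHub Pages, would let readers filter down to the engines and workloads they actually care about.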

Thanks for reading and for providing an open bench repo; it's a great first step toward giving stakeholders the context they need on when and how to explore Surreal.
