This was raised by @KonradHoeffner #6 (comment)
Similarly, traits whose implementations cannot be safely shared across threads are also a huge hassle, as with RDF you are often operating on large amounts of data where parallelization makes sense. So rather than supporting the theoretical one-in-a-million use case where someone has, for example, an IRI based on a string type that is somehow not thread safe, I would prefer that this be defined at the trait level (I guess this means adding Send + Sync everywhere?).
I'm slightly against this. First, I'm wary of arguments like "the theoretical one-in-a-million use case". Second, if you need implementations of a trait (say, Triple) to be Send+Sync, the additional burden of writing Triple+Send+Sync seems reasonable enough.
That being said, I share the intuition that those !Send or !Sync implementations would be rare enough, so if there is consensus on forcing our traits to be Send+Sync I would not oppose it.
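To illustrate the second point above, a minimal sketch (assuming a hypothetical, simplified Triple trait, not the actual trait under discussion): a consumer that needs thread safety can add the Send + Sync bounds at the use site, without the trait itself requiring them from every implementation.

```rust
// Hypothetical minimal Triple trait, for illustration only.
trait Triple {
    fn subject(&self) -> &str;
}

// A consumer that needs thread-safe triples adds the bounds itself
// (Triple + Send + Sync), rather than the trait imposing Send + Sync
// on every implementation.
fn process_in_parallel<T: Triple + Send + Sync>(t: &T) -> String {
    // Scoped threads let us borrow `t` across a thread boundary,
    // which is exactly where the Send + Sync bounds are needed.
    std::thread::scope(|s| s.spawn(|| t.subject().to_owned()).join().unwrap())
}

struct SimpleTriple {
    s: String,
}

impl Triple for SimpleTriple {
    fn subject(&self) -> &str {
        &self.s
    }
}

fn main() {
    let t = SimpleTriple {
        s: "http://example.org/s".into(),
    };
    println!("{}", process_in_parallel(&t));
}
```

A !Send or !Sync implementation (say, one holding an Rc-based string) would still compile against the trait itself; it would only be rejected at the call site of process_in_parallel, which is the trade-off being weighed here.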