Conversation

Force-pushed from eacd218 to df09411
misode left a comment:
I am not convinced that we need this. On the surface, using bigint for long ranges might seem cleaner and better, but in reality there is no clean way to implement it while keeping mcdoc types serializable to JSON. I really dislike the special `$$type` JSON format and the rawJSON patching.

I understand there is a benefit to using bigint for these, but I don't think it outweighs all the disadvantages.
IMO we move all long literals to BigInt and proceed with this. As for the implementation path, I agree with SPG here: we should just use the built-in `JSON.rawJSON`.
To clarify, my point was that it's also possible to avoid the special `$$type` format. As an illustrative example (not that I am for or against it), we could stringify all JavaScript numbers with an explicit `.0` and all bigints as raw JSON integers:

```js
function wokeStringify(data) {
	return JSON.stringify(data, (_key, value) => {
		if (typeof value === 'bigint') {
			return JSON.rawJSON(value.toString())
		} else if (typeof value === 'number') {
			let valStr = value.toString()
			if (!valStr.includes('.')) {
				valStr = `${valStr}.0`
			}
			return JSON.rawJSON(valStr)
		} else {
			return value
		}
	})
}
```
```js
function wokeParse(text) {
	return JSON.parse(text, (_key, value, { source }) => {
		if (typeof value === 'number') {
			return source.includes('.')
				? value
				: BigInt(source)
		}
		return value
	})
}
```

This allows us to parse and stringify BigInt losslessly from/to JSON:

```
> const thefuck = { foo: 10000000000000000000000000000000000000000000000000000001n, bar: 2, baz: { qux: 6.7 } }
undefined
> wokeStringify(thefuck)
'{"foo":10000000000000000000000000000000000000000000000000000001,"bar":2.0,"baz":{"qux":6.7}}'
> wokeParse(wokeStringify(thefuck))
{
  foo: 10000000000000000000000000000000000000000000000000000001n,
  bar: 2,
  baz: { qux: 6.7 }
}
```

This comes with the problem that semantic integers that use the `number` type end up serialized as JSON floats.
Oh, that's actually a pretty clean solution, I was envisioning something way more cursed.
The raw JSON patching needs to be solved, because we are using this as a library. I don't see any disadvantage in a special encoding like this for internal caches that are not user-facing; could you elaborate? I, on the contrary, strongly dislike saving JSON floats for integers: that's semantically completely wrong and feels hacky, and it relies on the encoding being done correctly (i.e. if the `.0` is missing for some reason, you have a corrupt cache that will cause exceptions down the line; this would right now happen with existing caches from old versions). In my case, it's explicit when you want a bigint, and retrieving a number instead doesn't break anything.
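To make the "exceptions down the line" concrete (my own hypothetical illustration, not code from the PR): if an old cache entry is missing the `.0`, a source-based reviver would hand back a bigint where a number is expected, and ordinary numeric code then throws:

```javascript
// What a source-based reviver would produce for a cached `2` without `.0`:
const fromOldCache = 2n

// Code that expects a plain number now throws
// "TypeError: Cannot convert a BigInt value to a number".
let error
try {
	Math.floor(fromOldCache)
} catch (e) {
	error = e
}
console.log(error instanceof TypeError) // true
```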
I have now updated this to the following:
```ts
 */
export function bigintJsonNumberReplacer(_key: string, value: any) {
	return typeof value === 'bigint'
		? (<any> JSON).rawJSON(value.toString())
```
Modern TypeScript uses `JSON as any`. `<any> JSON` is not usually used and I had to look up the syntax lol
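A tiny sketch of the two cast styles (my own illustration; it assumes `JSON.rawJSON` may be missing from both the lib typings and the runtime):

```typescript
// Both casts only silence the compiler for the still-untyped JSON.rawJSON.
// `JSON as any` is the common modern style; `<any> JSON` is the older
// angle-bracket syntax, which additionally cannot be used in .tsx files.
const modern = (JSON as any).rawJSON
const angleBracket = (<any> JSON).rawJSON

// Both expressions resolve to the same property,
// or to undefined on runtimes without JSON.rawJSON.
console.log(modern === angleBracket) // true
```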
```ts
 * @param value The value to encode
 * @returns Replaced value
 */
export function bigintJsonLoslessReplacer(_key: string, value: any) {
```
The spelling is Lossless btw
Why do we have the "lossless" replacer/reviver when we have these?

```ts
export function bigintJsonNumberReplacer(_key: string, value: any) {
	return typeof value === 'bigint'
		? (<any> JSON).rawJSON(value.toString())
		: value
}

export function bigintJsonNumberReviver(_key: string, value: any, data?: { source?: string }) {
	return typeof value === 'number'
		&& data?.source !== undefined
		&& !data.source.includes('.')
		&& value.toString() !== data.source
		? BigInt(data.source)
		: value
}
```

Are we trying to support environments that do not support `JSON.rawJSON`?
Have you read my comment above your review? If yes, please explain what you do not understand about what I am saying there.
I hadn't read your entire comment, that's my bad. I still think it's fine for the cache to lose the fact that a long that fits inside a JS number was initially created as a `bigint`.

Mcdoc long literals and ranges on long types are now read as a bigint, retaining maximum precision.
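To illustrate the precision gain (my own example): Java's `Long.MAX_VALUE` does not fit into a double, so reading it as a JS `number` silently rounds it, while `BigInt` keeps the exact value:

```javascript
// Long.MAX_VALUE = 2^63 - 1 = 9223372036854775807
const asNumber = Number('9223372036854775807')
const asBigint = BigInt('9223372036854775807')

console.log(asNumber)            // rounded to the nearest double: 9223372036854775808
console.log(asBigint.toString()) // exact: "9223372036854775807"
```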
A `toJSON` function is patched onto the BigInt class according to the best practices outlined here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/BigInt#use_within_json and here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Errors/BigInt_not_serializable#providing_a_custom_serialization_method

By default, any bigint will be serialized as a raw JSON number. Consumers of the JSON need to read integers without precision loss in order to benefit from this. If this is not wanted and a bigint should e.g. be serialized as a JSON string instead, this needs to be done explicitly in that case.
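For comparison, the string-based variant of that MDN advice looks roughly like this (a sketch only; this PR emits a raw JSON number via `JSON.rawJSON` instead of a string):

```javascript
// Sketch of the MDN-recommended toJSON patch, using the string encoding.
BigInt.prototype.toJSON = function () {
	return this.toString()
}

// Without the patch, JSON.stringify would throw
// "TypeError: Do not know how to serialize a BigInt".
console.log(JSON.stringify({ value: 9007199254740993n }))
// '{"value":"9007199254740993"}'
```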
The SymbolTable has been altered to serialize and deserialize bigints on its data field consistently. For this purpose, during `unlink` all bigints in the data field are replaced with an object of the following structure: `{"$$type": "bigint", "value": "123456789"}`. When `link`ing an unlinked table (or a deserialized table), this replacement is undone again, restoring the original structure.

The SymbolTable serialization is important for our cache. If it stayed untouched, the precision would be lost by JS's JSON.parse implementation, resulting in no benefit after a cache write and read cycle.
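The round trip described above can be sketched like this (illustrative helper names, not the actual Spyglass `unlink`/`link` implementation):

```javascript
// Replace every bigint with a plain-JSON marker object before serialization...
function encodeBigints(value) {
	if (typeof value === 'bigint') {
		return { $$type: 'bigint', value: value.toString() }
	}
	if (Array.isArray(value)) {
		return value.map(encodeBigints)
	}
	if (value !== null && typeof value === 'object') {
		return Object.fromEntries(
			Object.entries(value).map(([k, v]) => [k, encodeBigints(v)]),
		)
	}
	return value
}

// ...and restore the original structure after deserialization.
function decodeBigints(value) {
	if (Array.isArray(value)) {
		return value.map(decodeBigints)
	}
	if (value !== null && typeof value === 'object') {
		if (value.$$type === 'bigint' && typeof value.value === 'string') {
			return BigInt(value.value)
		}
		return Object.fromEntries(
			Object.entries(value).map(([k, v]) => [k, decodeBigints(v)]),
		)
	}
	return value
}

const data = { min: -9223372036854775808n, max: 9223372036854775807n, name: 'long' }
const restored = decodeBigints(JSON.parse(JSON.stringify(encodeBigints(data))))
console.log(restored.max === 9223372036854775807n) // true
```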