
Conversation

@lifenjoiner
Member

Exclude servers/relays that have been removed from the sources, had their privacy level lowered, or whose IPv6 address has become unavailable.

There's no need to shuffle proxy.registeredServers on every update (every 10 minutes); shuffling ServersInfo.registeredServers is enough.

PS: Distinguishing between updating existing items and adding new ones increases complexity.

@jedisct1
Member

Besides the complexity, I'm worried about data races and slice corruption.

That looks risky. Maybe this is correct, but I'm unable to understand that code.

@lifenjoiner
Member Author

Yes, we use slices, and the working model is:

  1. serversInfo.registeredServers/serversInfo.registeredRelays

updateRegisteredServers():
sources --> New RegisteredServers --> proxy.registeredServers/proxy.registeredRelays, new or fallback --> copy --> serversInfo.registeredServers/serversInfo.registeredRelays, Lock()

refresh():
serversInfo.registeredServers/serversInfo.registeredRelays, RLock() --> copy --> serversInfo.inner of type ServersInfo, new or fallback

```go
type RegisteredServer struct {
	name        string
	stamp       stamps.ServerStamp
	description string
}

type ServerStamp struct {
	ServerAddrStr string
	ServerPk      []uint8
	Hashes        [][]uint8
	ProviderName  string
	Path          string
	Props         ServerInformalProperties
	Proto         StampProtoType
}

type ServerInformalProperties uint64

type StampProtoType uint8
```

ServerPk and Hashes are nested slices; the other fields are copied safely by value. The nested slices are not deep-copied: the copies still refer to the same backing arrays.

serversInfo.registeredServers/serversInfo.registeredRelays refer to the new RegisteredServer instances.

stamp.Hashes is used by fetchDoHServerInfo() and _fetchODoHTargetInfo().
stamp.ServerPk is used by fetchDNSCryptServerInfo() to generate a new ServerInfo.ServerPk as [32]byte, which is held by serversInfo.inner. So serversInfo.inner no longer refers to serversInfo.registeredServers.

Outgoing requests (processIncomingQuery()) refer to the latest RegisteredServer instances. Once all references are gone, the GC can clean the old ones up.

Slices of slices should be tracked correctly by Go.

The principle is:
Creating completely new objects and using copy reduces data races and slice corruption, whereas updating existing objects in place is more likely to cause issues.

For example:

```go
proxy.registeredServers[i].stamp = registeredServer.stamp
```

```go
copy(registeredServers, serversInfo.registeredServers)
serversInfo.RUnlock()
```

Since stamp.Hashes and stamp.ServerPk are nested slices, copy alone is still not safe if RegisteredServer.stamp is updated in place.

  2. serversInfo.inner

For the first round, refresh() still uses refreshServer() to make servers available ASAP. In fact, it creates a new inner; there are no in-place updates.

For the following rounds, refresh() channels the goroutine results and then creates a completely new inner.

Elements of inner are also rebuilt when they are updated. inner is safe.

I haven't found any data races or slice corruption in the new solution. Could you point them out to me? I'll make improvements.

@jedisct1
Member

This is too complicated, and I won't be able to maintain that. Makes me feel like I should abandon the project.

@lifenjoiner
Member Author

😲

@lifenjoiner
Member Author

I've split the changes into 3 commits. One of them is a new PR #3033.

Exclude servers/relays that have been removed from the sources, had their
privacy level lowered, or whose IPv6 address has become unavailable while
IPv6 servers are enabled.
@lifenjoiner
Member Author

Optimize for clear logic.
