
Commit 7da1658

committed
docs: Added website article about the boxing systems
1 parent 718b924 commit 7da1658

File tree

2 files changed: +37 -1 lines changed


docs/_data/docs.yml

Lines changed: 2 additions & 1 deletion
@@ -34,4 +34,5 @@
  - bitwriter-bitreader-bitstream
  - custom-transports
  - network-profiler-window
- - custom-serialization
+ - custom-serialization
+ - boxing-systems
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
---
title: Boxing Systems
permalink: /wiki/boxing-systems/
---
While the MLAPI proudly positions itself as a performance-oriented API, you might be surprised by how many operations box values and how much reflection the MLAPI uses. This page explains where boxing takes place and why we consider it justified. **This is an advanced article for performance enthusiasts.**
### Where

The first question is: where do we box or use reflection?
#### Convenience RPCs

Convenience RPCs box all of their parameters and use reflection to invoke the target method (the method lookup is only done once).
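To illustrate, here is a minimal hypothetical sketch (not the actual MLAPI code) of a convenience-style invocation path: the target method is looked up via reflection once and cached, but every call still boxes value-type arguments into an `object[]` and dispatches through `MethodInfo.Invoke`.

```csharp
using System;
using System.Reflection;

// Hypothetical sketch of a convenience-style RPC stub: one-time reflection
// lookup, but boxed arguments and reflection-based invocation on every call.
public class ConvenienceRpcStub
{
    private readonly object _target;
    private readonly MethodInfo _method; // looked up once, then cached

    public ConvenienceRpcStub(object target, string methodName)
    {
        _target = target;
        _method = target.GetType().GetMethod(
            methodName,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
    }

    public void Invoke(params object[] args)
    {
        // Every int/float/struct argument was boxed when it was placed into
        // the object[]; the call itself goes through reflection.
        _method.Invoke(_target, args);
    }
}
```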
#### Custom Serialization Handlers

All custom serialization handlers are exposed by the API as generics, but behind the scenes the values are boxed. They are, however, never invoked via reflection.
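A hypothetical sketch (not the real MLAPI registry) of how a generic-looking API can box behind the scenes: handlers are registered with a generic signature but stored as object-typed delegates, so value types are boxed on the way in, while the delegate itself is called directly with no reflection on the call path.

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical handler registry: generic surface, object-typed storage.
public static class HandlerRegistrySketch
{
    private static readonly Dictionary<Type, Action<Stream, object>> Writers =
        new Dictionary<Type, Action<Stream, object>>();

    public static void RegisterWriter<T>(Action<Stream, T> writer)
    {
        // The (T)value cast unboxes value types; the caller boxed them
        // when handing the value over as object.
        Writers[typeof(T)] = (stream, value) => writer(stream, (T)value);
    }

    public static void Write<T>(Stream stream, T value)
    {
        // Boxing happens here: value (possibly a struct) is passed as object.
        // The stored delegate is then invoked directly, without reflection.
        Writers[typeof(T)](stream, value);
    }
}
```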
#### Default NetworkedVar Implementations

All default NetworkedVar implementations box their values when writing and reading; no reflection is used, however.
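As a rough sketch (again hypothetical, not the actual NetworkedVar code), the shape of the problem is a generic container whose write path hands its value to an object-typed serializer, boxing on write and unboxing on read, with no reflection involved.

```csharp
using System.IO;

// Hypothetical generic container with an object-typed write path.
public class BoxedVarSketch<T>
{
    public T Value;

    // Stand-in for an object-based serialization routine (assumption,
    // not the real API).
    private static void WriteObject(Stream stream, object boxedValue)
    {
        // ... serialize the boxed value ...
    }

    public void WriteValue(Stream stream)
    {
        // The implicit conversion from T to object boxes value types here.
        WriteObject(stream, Value);
    }
}
```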
### Why

The second question is why we box and use reflection at all. Before answering, it's important to note that all of the above have alternatives that use neither reflection nor boxing: performance RPCs don't box and don't use reflection, and custom NetworkedVar containers don't box or use reflection either. Back to the convenience API: let's start with boxing.
To use fast collections, we need types that are known at compile time (the sketch below illustrates the difference). The only way around this is to use a weaver, which leads to the next question: why don't we weave the code?
The answer is fairly simple: it's messy, and the advantage is minimal.
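To make the compile-time-types point concrete, here is a hypothetical sketch (not MLAPI code): when the type is known at compile time, the call goes straight to a strongly typed writer with no boxing; when the value only arrives as a boxed object, the best you can do is a runtime type switch. A weaver's job would be to rewrite call sites from the second shape into the first at build time, which is precisely the maintenance cost discussed below.

```csharp
using System;
using System.IO;

public static class WritePathSketch
{
    // Compile-time known type: the right overload is chosen at compile time,
    // no boxing occurs.
    public static void WriteKnown(BinaryWriter writer, int value)
    {
        writer.Write(value);
    }

    // Runtime-typed (boxed) value: without generated code, a type switch
    // over the boxed object is required, unboxing as it goes.
    public static void WriteBoxed(BinaryWriter writer, object boxed)
    {
        switch (boxed)
        {
            case int i: writer.Write(i); break;
            case float f: writer.Write(f); break;
            case string s: writer.Write(s); break;
            default: throw new NotSupportedException(boxed.GetType().Name);
        }
    }
}
```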
#### Performance

At the time of writing, the MLAPI is much more feature complete than its competitors, offering a far more complex serialization pipeline with encryption, authentication, targeting and more. Despite this, the MLAPI has been optimized to run blazing fast. Running 1 million RPCs and comparing the results against the currently largest competitors, Mirror and UNET, both of which use weavers, the MLAPI is consistently more than 10% faster when using its convenience RPCs, and its performance RPCs are unmatched, coming in about 30% faster than both other libraries.
#### Clarity

Currently, weavers are messy. They require knowledge of IL and are unpleasant to develop. Both libraries that use a weaver today have a convoluted, 1,000+ line weaver codebase that only a few people even know how to edit, and it's a pain to maintain. A weaver also raises the barrier to entry for contributing to and maintaining the project, and it makes debugging harder.
### Conclusion

The MLAPI uses boxing and reflection where the impact is minimal and invisible to the user. The MLAPI is all about options: if you need your code to run faster, you can use the performance alternatives. You will probably never notice the effects of value boxing or reflection-based invocation.

0 commit comments
