What is the motivation for the index-based format? #35
Given that most use cases of the API are centered around server-side aggregation, we wanted to bake in a simple form of structural compression that enables most sites to avoid manually implementing common forms of payload size optimization (e.g. string and stack deduplication), while still being easy to read on the client side if necessary. Most of this design was inspired by Chrome and Gecko's trace format convergence.
If compression is really the goal, I suspect that you may have higher uptake of the usage mode you hope for if you include output of the binary format in this API, or at least popularize a driver library in conjunction with this proposal. Otherwise, I imagine many sites will just use …
Sounds good. We're looking into potentially open-sourcing a companion library leveraging …. With regards to structural compression -- I was actually referring to the spec's behaviour of performing stack and frame deduplication in the output trace. The formal processing model will make this clearer, but the idea is that samples with the same stack (a common case) will reuse the same stack IDs. Stacks are recursively defined as well, such that they can share parent stack IDs.
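To make the deduplication concrete, here is a minimal sketch of an index-based trace and a helper that expands a stack ID back into frame names. The field names (`frames`, `stacks`, `samples`, `frameId`, `parentId`, `stackId`) are modeled loosely on the proposal's `ProfilerTrace` shape but should be treated as illustrative assumptions, not the normative spec:

```typescript
// Illustrative shapes; the real ProfilerTrace dictionary may differ.
interface Frame { name: string }
interface Stack { frameId: number; parentId?: number }
interface Sample { timestamp: number; stackId: number }
interface Trace { frames: Frame[]; stacks: Stack[]; samples: Sample[] }

const trace: Trace = {
  frames: [{ name: "main" }, { name: "render" }, { name: "paint" }],
  // Stacks are recursive: stack 1 reuses stack 0 as its parent,
  // and stack 2 reuses stack 1, so shared prefixes are stored once.
  stacks: [
    { frameId: 0 },               // [main]
    { frameId: 1, parentId: 0 },  // [main, render]
    { frameId: 2, parentId: 1 },  // [main, render, paint]
  ],
  // Two samples captured on the same stack share one stackId
  // instead of repeating the full frame list per sample.
  samples: [
    { timestamp: 0, stackId: 2 },
    { timestamp: 1, stackId: 2 },
  ],
};

// Expand a stackId into readable frame names, root first.
function resolveStack(t: Trace, stackId: number): string[] {
  const out: string[] = [];
  for (let id: number | undefined = stackId; id !== undefined; ) {
    const s = t.stacks[id];
    out.unshift(t.frames[s.frameId].name);
    id = s.parentId;
  }
  return out;
}

console.log(resolveStack(trace, 2)); // ["main", "render", "paint"]
```

Note how the cost of a repeated stack is a single integer per sample, while a client-side reader only needs the small `resolveStack` walk to recover the strings.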
I can understand, in a binary format, using certain arrays of strings, separate from indexed references to them. But the current proposal uses this in its JavaScript API as well, in the ProfilerTrace dictionary and nested structures. I would have expected to see strings directly used in such dictionaries instead. Is this motivated by compression, or ease of certain kinds of processing, or something else?