feat(api): track request content length #3316
Conversation
🙌🏻 Excited that we'll have this.
Just one question about dumping bytes: it's usually the more standard way to track data-transfer metrics, and I noticed in the datapoints for the traces at the top of the page that they showed a lot of < 1 values.
const contentLength = req.header('content-length');
if (contentLength) {
    const int = parseInt(contentLength, 10);
    active.setTag('http.request.content_length', `${(int / 1024).toPrecision(2)}kb`);
Just curious - why not just dump bytes?
I formatted it for readability; reading raw bytes can be confusing IMO. Happy to log the raw value.
You mean listening at the express level?
@bodinsamuel No, I just meant that in o11y pipelines you'd usually dump bytes, not kilobytes. You can see that in the OTel semantic conventions here: https://opentelemetry.io/docs/specs/semconv/http/http-metrics/#metric-httpclientresponsebodysize Adding the suffix definitely helps there, and I'm down for whatever. Was just a trivial bit of feedback.
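For illustration, a minimal sketch of what tagging raw bytes could look like (assuming the same `active` span and Express `req` as in the snippet above; the `http.request.body.size` name follows the OTel convention linked here, not what this PR uses):

```ts
const contentLength = req.header('content-length');
if (contentLength) {
    const bytes = parseInt(contentLength, 10);
    if (!Number.isNaN(bytes)) {
        // Raw byte count, no unit suffix, so the backend can aggregate and convert freely
        active.setTag('http.request.body.size', bytes);
    }
}
```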
@@ -13,7 +14,15 @@ export function asyncWrapper<TEndpoint extends Endpoint<any>, Locals extends Rec
 ): RequestHandler<any, TEndpoint['Reply'], any, any, any> {
     return (req, res, next) => {
         const active = tracer.scope().active();
-        active?.setTag('http.route', req.route?.path || req.originalUrl);
+        if (active) {
There is a `hooks` attribute for the dd-trace express plugin: https://datadoghq.dev/dd-trace-js/interfaces/export_.plugins.express.html#hooks
Could we use this instead?
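For reference, roughly what that could look like in a `tracer.ts`, based on the `hooks` option documented at that link (a sketch under that assumption; the exact callback signature should be double-checked):

```ts
import tracer from 'dd-trace';

tracer.use('express', {
    hooks: {
        // Called for each request handled by the express plugin; span and req can be undefined
        request: (span, req) => {
            const contentLength = req?.headers?.['content-length'];
            if (span && contentLength) {
                span.setTag('http.request.content_length', contentLength);
            }
        }
    }
});
```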
It's handy to have it here (or at least in the other code) because it's shared across multiple services; otherwise I need to copy the same code in each tracer.ts. Wdyt?
do we need it for all services (jobs, orchestrator) or just for server though?
Regarding yesterday's incidents, I think it would be great to log them everywhere; we're a bit in the blind regarding the payloads that customers pass.
            if (contentLength) {
                const int = parseInt(contentLength, 10);
                active.setTag('http.request.content_length', `${(int / 1024).toPrecision(2)}kb`);
                metrics.histogram(metrics.Types.API_REQUEST_CONTENT_LENGTH, int);
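Pieced together from the two hunks above, the body of `asyncWrapper` presumably ends up roughly like this (a reconstruction for readability; imports and the rest of the handler are assumed):

```ts
return (req, res, next) => {
    const active = tracer.scope().active();
    if (active) {
        active.setTag('http.route', req.route?.path || req.originalUrl);

        const contentLength = req.header('content-length');
        if (contentLength) {
            const int = parseInt(contentLength, 10);
            // Human-readable KB tag on the trace, raw byte count on the histogram
            active.setTag('http.request.content_length', `${(int / 1024).toPrecision(2)}kb`);
            metrics.histogram(metrics.Types.API_REQUEST_CONTENT_LENGTH, int);
        }
    }
    // ... call the wrapped handler and forward errors to next()
};
```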
Not sure what the metric gives us. When would it be useful?
It's not possible to graph the tag, but we can graph the volume of this metric, so we can tie it to outages (if any).
Changes
Fixes https://linear.app/nango/issue/NAN-2540/track-request-content-length
Track the content length inside traces as a tag, and as a metric so it can be graphed properly. I'm not the best at math, so hopefully the computation is correct.
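As a quick sanity check of the conversion (illustrative values, not taken from the PR):

```ts
// content-length is in bytes; dividing by 1024 converts to kibibytes
const asKb = (bytes: number) => `${(bytes / 1024).toPrecision(2)}kb`;

console.log(asKb(512));     // "0.50kb"  -> sub-kilobyte payloads explain the "< 1" values seen in the traces
console.log(asKb(1536));    // "1.5kb"
console.log(asKb(1048576)); // "1.0e+3kb" -> toPrecision(2) switches to exponential notation for large payloads
```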