TorQ Query Logging Management #573
base: master
Conversation
jonathonmcmurray
left a comment
Realised that there are a number of scripts that appear to be the same, or almost the same, as existing scripts for non-"query" processes (gw, hdb etc.). Rather than commenting on each one, I'll leave this more general comment: do we need to duplicate all this code? Can we not just load the regular code, possibly leveraging TorQ's parentproctype functionality? Loading the same code will make maintenance much easier as we make changes or fixes to processes down the line, and we can always overwrite functions & variables if needed.
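As a rough sketch (the flag and file names below are assumptions, so please check against the TorQ docs), the query processes could be started as children of the existing proctypes and then override only what actually differs:

/ hypothetical startup: queryrdb declares rdb as its parent so the standard rdb code/config is reused
/   q torq.q -proctype queryrdb -procname queryrdb1 -parentproctype rdb ...
/ a small queryrdb.q then only overrides the bits that differ, e.g.
.servers.CONNECTIONS:`querytp`querygateway   / hypothetical override; everything else comes from rdb.q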
Also, in the PR description:
This realtime and historical data exists within a usage table that may itself be queried on the backend or accessed through a frontend data visualisation tool, as we have illustrated below.
Where's this illustration?
// queryfeed proc script - subs to .usage.usage tables and publishes to query tickerplant

// add connections to all procs for query tracking to be enabled
.servers.CONNECTIONS:.servers.CONNECTIONS,exec distinct proctype from (" SS ";enlist csv) 0: hsym `$getenv `TORQPROCESSES where procname in subprocs;
I think subprocs should have a default value in this script, similar to other TorQ scripts (I know it is defined in config, but typically we also have a default to fall back on in the main script itself).
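Something like the usual fallback pattern, for example (the default list here is only illustrative):

subprocs:@[value;`subprocs;`rdb`hdb`gateway]   / use the config value if defined, else fall back to a default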
Everything in this script is defined in the root namespace; usually in TorQ we put most code in namespaces & keep the root namespace reasonably clean.
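For example, something along these lines (namespace, variable and function names are only illustrative):

\d .queryfeed                                      / keep the queryfeed code out of the root namespace
subprocs:@[value;`.queryfeed.subprocs;`rdb`hdb]    / hypothetical default, as per the comment above
normalise:{[t] update cmd:first each cmd from t}   / hypothetical helper, for illustration only
\d .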
usnorm:update cmd:-2#'";" vs' cmd from us where user=`gateway;
usnorm:update cmd:first each cmd from usnorm where (first each cmd)~'(last each cmd);

h(".u.upd";`usage;value flip select from usnorm);
Should we stick to the same coding convention everywhere?
h("functionname";args)
or
h(`functionname;args)
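Both forms resolve to the same call over IPC (a string first element is evaluated on the remote side, a symbol is looked up there), so it's mainly a consistency/readability question. A quick illustration with a hypothetical handle and port:

q)h:hopen 6010        / hypothetical port
q)h("count";til 5)    / string form, evaluated remotely
5
q)h(`count;til 5)     / symbol form, resolved remotely
5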
dict:.j.k x;
k:key dict;
// Change the type of `tablename`instruments`grouping`columns to symbols
dict:@[dict;`tablename`instruments`grouping`columns inter k;{`$x}];
Can be refactored
q)`a`b!("Hello";"World")
a| "Hello"
b| "World"
q)d:`a`b!("Hello";"World")
q)@[d;`a`b;`$]
a| Hello
b| World
q)@[d;`a;`$]
a| `Hello
b| "World"
Suggested change:
- dict:@[dict;`tablename`instruments`grouping`columns inter k;{`$x}];
+ dict:@[dict;`tablename`instruments`grouping`columns inter k;`$];
// Change the type of `tablename`instruments`grouping`columns to symbols
dict:@[dict;`tablename`instruments`grouping`columns inter k;{`$x}];
// Change the type of `starttime`endtime to timestamps (altering T -> D and - -> . if applicable)
dict:@[dict;`starttime`endtime inter k;{x:ssr[x;"T";"D"];x:ssr[x;"-";"."];value x}];
You can potentially use the following parsing (but please check against a concrete example of what you get from the JSON file):
q)"P"$"2023-05-30T12:25:26.633"
2023.05.30D12:25:26.633000000
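If the values coming out of the JSON really are plain ISO-8601 strings like the above, the ssr/value chain on the starttime/endtime keys could then collapse to something like this (a sketch, assuming scalar string values):

Suggested change:
- dict:@[dict;`starttime`endtime inter k;{x:ssr[x;"T";"D"];x:ssr[x;"-";"."];value x}];
+ dict:@[dict;`starttime`endtime inter k;"P"$];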
// retrieve aggregations
if[`aggregations in k;dict[`aggregations]:value dict[`aggregations]];
// Convert timebar
if[`timebar in k;dict[`timebar]:@[value dict[`timebar];1+til 2;{:`$x}]];
Can be refactored:

Suggested change:
- if[`timebar in k;dict[`timebar]:@[value dict[`timebar];1+til 2;{:`$x}]];
+ if[`timebar in k;dict[`timebar]:@[value dict[`timebar];1+til 2;`$]];
filterskey:{[filtersstrings]
  likelist:ss[filtersstrings;"like"];
  if[0=count likelist;value filtersstrings];
I don't understand what we are trying to achieve here, can you please elaborate? As written, the result of value filtersstrings inside the if is discarded, since there is no explicit return.
This TorQ enhancement adds a number of new processes to manage the flow of query data. The new process types include the queryfeed, querytp, queryrdb, queryhdb, and querygateway. All interprocess communication (IPC) messages are routed through the queryfeed and processed by the query sub-stack. This new feature provides significant benefits for anyone monitoring or supporting the system, allowing them to identify the performance, failures or memory usage of any query made to the system. This realtime and historical data exists within a usage table that may itself be queried on the backend or accessed through a frontend data visualisation tool, as we have illustrated below.
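For example, summary statistics could be pulled straight from the replicated usage data on the query stack; a minimal sketch (the u and timer column names follow the standard TorQ .usage schema and are assumptions here, so adjust to the actual table):

q)/ hypothetical queryrdb/queryhdb session: query load per user and process type
q)select queries:count i, avgms:avg timer, maxms:max timer by u, proctype from usage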