Hi to the community,
CodeCarbon's high-level functions seem to become a computational bottleneck when trying to monitor a single inexpensive function in isolation inside a loop with many iterations.
For instance, this is common in streaming/online ML tasks:
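The loop looks roughly like this (a minimal sketch: the toy `learn_one` update and the synthetic stream are placeholders, not the actual model or data):

```python
from codecarbon import EmissionsTracker

def learn_one(x):
    # trivial per-sample update, standing in for an online/streaming model step
    return x * 0.5

tracker = EmissionsTracker(save_to_file=False)

for i, x in enumerate(range(1_000_000)):   # many cheap iterations
    tracker.start_task(f"step_{i}")        # per-iteration monitoring
    learn_one(x)                           # the inexpensive function to measure
    tracker.stop_task()                    # this start/stop pair dominates the runtime
```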
Adding the `start_task` and `stop_task` calls in the snippet above increases the execution runtime by a factor of 1000 (from a few seconds to several days). Using lower-level or private methods like `flush()` and `_do_measurements()` also adds this overhead.

Is there any way to do this efficiently, even if it's hacky or more manual?
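For completeness, the lower-level attempt looked roughly like this (again only a sketch; `_do_measurements()` is the private method mentioned above, and the toy update is a placeholder):

```python
from codecarbon import EmissionsTracker

def learn_one(x):
    return x * 0.5  # placeholder for the real per-sample update

tracker = EmissionsTracker(save_to_file=False)
tracker.start()

for x in range(1_000_000):
    learn_one(x)
    tracker._do_measurements()  # private measurement method; still too slow per iteration
    tracker.flush()             # persisting intermediate emissions adds further overhead

tracker.stop()
```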
Cheers,