test: Make streamer tests use concurrent metric groups #123
base: master
Conversation
Related-To: VLCLJ-2375
Signed-off-by: shubham kumar <[email protected]>
EXPECT_EQ(ZE_RESULT_SUCCESS,
          zetDeviceGetConcurrentMetricGroupsExp(
              device, metricGroupHandleList.size(),
              metricGroupHandleList.data(), nullptr, &concurrentGroupCount));
How do you know what size to allocate for metricGroupHandleList?
It comes in as a reference to the get_concurrent_metric_group method, so it is already appropriately allocated by the caller.
OK, so its size is defined by metricGroupHandleList.size().
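For reference, a minimal sketch of the usual Level Zero query-then-allocate pattern that would size such a list (the helper name and variable names are illustrative, not the suite's actual code):

#include <vector>
#include <level_zero/zet_api.h>

// Query-then-fill pattern: the first call returns only the number of
// metric groups, the second fills a vector sized to that count.
std::vector<zet_metric_group_handle_t>
get_metric_group_handles(zet_device_handle_t device) {
  uint32_t metricGroupCount = 0;
  // First call: count only (handles pointer is nullptr).
  zetMetricGroupGet(device, &metricGroupCount, nullptr);

  std::vector<zet_metric_group_handle_t> metricGroupHandleList(
      metricGroupCount);
  // Second call: fill the pre-sized vector with handles.
  zetMetricGroupGet(device, &metricGroupCount, metricGroupHandleList.data());
  return metricGroupHandleList;
}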
  matchedGroupsInfo.emplace_back(
      metricGroupHandle, metricGroupProp.name, metricGroupProp.description,
      metricGroupProp.domain, metricGroupProp.metricCount);
}

return matchedGroupsInfo;

concurrentMetricGroupHandles =
What's the rationale for changing existing tests to use getConcurrentMetricGroups() instead of creating a new test? I think the value of this API will come on platforms with multiple metrics domains (or metrics sources), where we can effectively confirm the groups can be collected concurrently by opening different streamers (one per group) and/or tracers (with many groups).
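Something along these lines (a rough sketch of the idea, not a request for this exact code; the helper name, descriptor values, and the null notification event are placeholders):

#include <vector>
#include <level_zero/zet_api.h>

// Sketch: open one streamer per metric group within a single concurrent
// group, which is what would actually exercise concurrent collection.
void open_streamers_for_concurrent_group(
    zet_context_handle_t context, zet_device_handle_t device,
    std::vector<zet_metric_group_handle_t> &concurrentGroup) {
  // Activate every group in the concurrent group at once.
  zetContextActivateMetricGroups(context, device, concurrentGroup.size(),
                                 concurrentGroup.data());

  std::vector<zet_metric_streamer_handle_t> streamers;
  for (auto metricGroup : concurrentGroup) {
    zet_metric_streamer_desc_t desc = {};
    desc.stype = ZET_STRUCTURE_TYPE_METRIC_STREAMER_DESC;
    desc.notifyEveryNReports = 1000; // placeholder value
    desc.samplingPeriod = 100000;    // placeholder, in nanoseconds

    zet_metric_streamer_handle_t streamer = nullptr;
    zetMetricStreamerOpen(context, device, metricGroup, &desc,
                          /*hNotificationEvent=*/nullptr, &streamer);
    streamers.push_back(streamer);
  }

  // ... run a workload, then read and validate data from each streamer ...

  for (auto streamer : streamers)
    zetMetricStreamerClose(streamer);
  zetContextActivateMetricGroups(context, device, 0, nullptr);
}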
The idea is not to test the functionality of getConcurrentMetricGroups. Currently all streamer tests iterate over every metric group, which was causing timeouts on Elmo runs. The goal here is to reduce run-time by reducing the number of metric groups the tests run on. Santhosh suggested running just one metric group from each source, which should be enough for all the tests. The new implementation does exactly that: it takes the first concurrent group returned by the getConcurrentMetricGroups API and uses only that group for the tests. Today this is expected to contain one group from OA and one from IP sampling.
// Use only the first concurrent group to keep test run-time low. The API
// reorders metricGroupHandleList so that the first
// countPerConcurrentGroup[0] entries form that group.
uint32_t metricGroupCountInConcurrentGroup = countPerConcurrentGroup[0];
for (uint32_t i = 0; i < metricGroupCountInConcurrentGroup; i++) {
  concurrentMetricGroupList.push_back(metricGroupHandleList[i]);
}
return concurrentMetricGroupList;
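For context, a minimal sketch of how this selection could look end to end (the helper name and the omission of EXPECT_* checks are simplifications; the two-call pattern matches how zetDeviceGetConcurrentMetricGroupsExp is queried earlier in this diff):

#include <vector>
#include <level_zero/zet_api.h>

// Sketch: reduce the metric groups under test to the first concurrent
// group reported by the driver (expected today to hold one group per
// source, e.g. one OA and one IP-sampling group).
std::vector<zet_metric_group_handle_t> get_first_concurrent_group(
    zet_device_handle_t device,
    std::vector<zet_metric_group_handle_t> &metricGroupHandleList) {
  // First call: query only the number of concurrent groups.
  uint32_t concurrentGroupCount = 0;
  zetDeviceGetConcurrentMetricGroupsExp(
      device, metricGroupHandleList.size(), metricGroupHandleList.data(),
      nullptr, &concurrentGroupCount);

  // Second call: fetch the per-group counts; this also reorders
  // metricGroupHandleList so each concurrent group is contiguous.
  std::vector<uint32_t> countPerConcurrentGroup(concurrentGroupCount);
  zetDeviceGetConcurrentMetricGroupsExp(
      device, metricGroupHandleList.size(), metricGroupHandleList.data(),
      countPerConcurrentGroup.data(), &concurrentGroupCount);

  // Keep only the first concurrent group's handles.
  std::vector<zet_metric_group_handle_t> concurrentMetricGroupList(
      metricGroupHandleList.begin(),
      metricGroupHandleList.begin() + countPerConcurrentGroup[0]);
  return concurrentMetricGroupList;
}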
Perfect, makes sense. Just a few comments:
- Leave some comments in the code so your explanation is easy to find. No need to mention Elmo, just the idea.
- There is an env var that reduces the percentage of tests run. How does it interact with this change?
Related-To: VLCLJ-2375