
Add verbose logs for node/plugin scores even ranged in low levels #103515

Merged
merged 2 commits into kubernetes:master on Sep 8, 2021

Conversation

muma378
Contributor

@muma378 muma378 commented Jul 6, 2021

What type of PR is this?

/kind feature

What this PR does / why we need it:

No scheduling info is captured if the logging level is less than 10. It would help in finding out what happened if the top N highest node/plugin scores were dumped at a lower level.

@damemi made some improvements in #99411; however, it was found to cause a ~50% performance drop in large clusters.

As @Huang-Wei suggested in #101820, we use a dynamic logging strategy that dumps scores depending on the logging level (see the sketch after this list):

  • for logging levels in [4, 6), dump the scores for the top M nodes/plugins, and make it as performant as possible;
  • for logging levels in [6, 10), dump the scores for the top N nodes, and probably show all plugins;
  • for logging levels >= 10, dump the scores for all nodes/plugins.
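
To make the tiers concrete, here is a minimal sketch of how the branching could look. This is illustrative only: nodeScore, topN, and dumpScores are hypothetical names, not the PR's actual identifiers; only the klog calls and the tier bounds above are taken from the description.

package main

import (
    "sort"

    "k8s.io/klog/v2"
)

type nodeScore struct {
    Name  string
    Score int64
}

// topN returns the n highest-scoring entries (hypothetical helper; the
// real selection could be done more cheaply, e.g. with a heap).
func topN(scores []nodeScore, n int) []nodeScore {
    s := append([]nodeScore(nil), scores...)
    sort.Slice(s, func(i, j int) bool { return s[i].Score > s[j].Score })
    if n > len(s) {
        n = len(s)
    }
    return s[:n]
}

// dumpScores branches on the effective verbosity, mirroring the tiers
// above (checking the highest level first, since V(10) implies V(6) and V(4)).
func dumpScores(pod string, scores []nodeScore) {
    switch {
    case klog.V(10).Enabled(): // v >= 10: all nodes/plugins
        for _, s := range scores {
            klog.InfoS("Node score", "pod", pod, "node", s.Name, "score", s.Score)
        }
    case klog.V(6).Enabled(): // v in [6, 10): top N nodes
        for _, s := range topN(scores, 6) {
            klog.InfoS("Top node score", "pod", pod, "node", s.Name, "score", s.Score)
        }
    case klog.V(4).Enabled(): // v in [4, 6): top M nodes, cheapest path
        for _, s := range topN(scores, 3) {
            klog.InfoS("Top node score", "pod", pod, "node", s.Name, "score", s.Score)
        }
    }
}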

The output looks like this with -v=4:

I0728 15:09:41.627395   98057 scheduling_queue.go:872] "About to try and schedule pod" pod="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d/extender-test-pod"
I0728 15:09:41.627418   98057 scheduler.go:516] "Attempting to schedule pod" pod="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d/extender-test-pod"
E0728 15:09:41.627519   98057 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d\" not found" namespace="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d"
E0728 15:09:41.630656   98057 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d\" not found" namespace="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d"
I0728 15:09:41.630850   98057 generic_scheduler.go:597] "Top 3 plugins for pod on node" pod="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d/extender-test-pod" node="machine3" score=1000400 plugins=[{Name:NodePreferAvoidPods Score:1000000} {Name:PodTopologySpread Score:200} {Name:TaintToleration Score:100}]
I0728 15:09:41.630875   98057 generic_scheduler.go:597] "Top 3 plugins for pod on node" pod="scheduler-extenderbd2630b2-df6f-4a45-abe0-7e80e26ac76d/extender-test-pod" node="machine2" score=1000400 plugins=[{Name:NodePreferAvoidPods Score:1000000} {Name:PodTopologySpread Score:200} {Name:TaintToleration Score:100}]

The output looks like this with -v=6:

I0728 15:01:01.121359   96269 scheduler.go:516] "Attempting to schedule pod" pod="scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d/extender-test-pod"
E0728 15:01:01.121444   96269 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d\" not found" namespace="scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d"
E0728 15:01:01.123943   96269 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d\" not found" namespace="scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d"
I0728 15:01:01.124131   96269 generic_scheduler.go:597] "Top 10 plugins for pod on node" pod="scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d/extender-test-pod" node="machine2" score=1000400 plugins=[{Name:NodePreferAvoidPods Score:1000000} {Name:PodTopologySpread Score:200} {Name:NodeResourcesBalancedAllocation Score:100} {Name:TaintToleration Score:100} {Name:InterPodAffinity Score:0} {Name:NodeResourcesLeastAllocated Score:0} {Name:NodeAffinity Score:0} {Name:ImageLocality Score:0}]
I0728 15:01:01.124156   96269 generic_scheduler.go:597] "Top 10 plugins for pod on node" pod="scheduler-extender87c5ecb8-b5bf-4723-b247-3905865f6a5d/extender-test-pod" node="machine3" score=1000400 plugins=[{Name:NodePreferAvoidPods Score:1000000} {Name:PodTopologySpread Score:200} {Name:NodeResourcesBalancedAllocation Score:100} {Name:TaintToleration Score:100} {Name:InterPodAffinity Score:0} {Name:NodeResourcesLeastAllocated Score:0} {Name:NodeAffinity Score:0} {Name:ImageLocality Score:0}]

Which issue(s) this PR fixes:

Fixes #101820

Special notes for your reviewer:

Performance was evaluated and compared with the tool k8s-sched-perf-stat:

I ran the performance test with the following 4 scenarios (2 branches x 2 logging levels):

          v=1                   v=4
master    perf-v1-clean.txt     perf-v4-clean.txt
feature   perf-v1-verbose.txt   perf-v4-verbose.txt

with the command:

make test-integration WHAT=./test/integration/scheduler_perf KUBE_TEST_VMODULE=\"$KUBE_TEST_VMODULE\" KUBE_TEST_ARGS="-alsologtostderr=false -logtostderr=false -benchtime=1ns -bench=BenchmarkPerfScheduling/SchedulingBasic/5000Nodes/5000InitPods/1000PodsToSchedule -data-items-dir /tmp"

and the diff was evaluated:

++ bash -c 'go run *.go /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v1-clean.txt /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v1-verbose.txt'
+--------------------------+---------------------------+----------+--------+---------+---------+----------+
|          METRIC          |           GROUP           | QUANTILE |  UNIT  |   OLD   |   NEW   |   DIFF   |
+--------------------------+---------------------------+----------+--------+---------+---------+----------+
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Average  | pods/s |  106.19 |  130.95 | +23.32%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc50   | pods/s |   92.11 |   98.62 | +7.07%   |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc90   | pods/s |  217.78 |  275.38 | +26.45%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc99   | pods/s |  229.00 |  275.38 | +20.25%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1202.25 | 1237.40 | +2.92%   |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  348.70 |  786.11 | +125.44% |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 3358.05 | 2956.64 | -11.95%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 7474.71 | 6202.31 | -17.02%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1360.00 | 1348.09 | -0.88%   |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  607.56 |  981.18 | +61.50%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 4041.90 | 3599.99 | -10.93%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 8367.33 | 6907.92 | -17.44%  |
+--------------------------+---------------------------+----------+--------+---------+---------+----------+
++ bash -c 'go run *.go /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v4-clean.txt /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v4-verbose.txt'
+--------------------------+---------------------------+----------+--------+----------+---------+----------+
|          METRIC          |           GROUP           | QUANTILE |  UNIT  |   OLD    |   NEW   |   DIFF   |
+--------------------------+---------------------------+----------+--------+----------+---------+----------+
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Average  | pods/s |   105.88 |  138.51 | +30.82%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc50   | pods/s |    80.25 |   97.62 | +21.65%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc90   | pods/s |   223.13 |  275.67 | +23.55%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc99   | pods/s |   234.75 |  277.22 | +18.09%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Average  | ms     |   715.89 |  865.39 | +20.88%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |    47.56 |  386.74 | +713.23% |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     |  2282.13 | 2510.42 | +10.00%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     |  5320.34 | 5314.16 | -0.12%   |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     |  4508.74 | 1922.55 | -57.36%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  4688.92 | 1747.25 | -62.74%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     |  8867.54 | 3840.48 | -56.69%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 11172.94 | 6716.56 | -39.89%  |
+--------------------------+---------------------------+----------+--------+----------+---------+----------+
++ bash -c 'go run *.go /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v1-clean.txt /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v4-clean.txt'
+--------------------------+---------------------------+----------+--------+---------+----------+----------+
|          METRIC          |           GROUP           | QUANTILE |  UNIT  |   OLD   |   NEW    |   DIFF   |
+--------------------------+---------------------------+----------+--------+---------+----------+----------+
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Average  | pods/s |  106.19 |   105.88 | -0.29%   |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc50   | pods/s |   92.11 |    80.25 | -12.88%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc90   | pods/s |  217.78 |   223.13 | +2.46%   |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc99   | pods/s |  229.00 |   234.75 | +2.51%   |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1202.25 |   715.89 | -40.45%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  348.70 |    47.56 | -86.36%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 3358.05 |  2282.13 | -32.04%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 7474.71 |  5320.34 | -28.82%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1360.00 |  4508.74 | +231.53% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  607.56 |  4688.92 | +671.77% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 4041.90 |  8867.54 | +119.39% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 8367.33 | 11172.94 | +33.53%  |
+--------------------------+---------------------------+----------+--------+---------+----------+----------+
++ bash -c 'go run *.go /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v1-verbose.txt /Users/xiaoyang/go/src/k8s.io/kubernetes/perf-v4-verbose.txt'
+--------------------------+---------------------------+----------+--------+---------+---------+---------+
|          METRIC          |           GROUP           | QUANTILE |  UNIT  |   OLD   |   NEW   |  DIFF   |
+--------------------------+---------------------------+----------+--------+---------+---------+---------+
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Average  | pods/s |  130.95 |  138.51 | +5.77%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc50   | pods/s |   98.62 |   97.62 | -1.01%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc90   | pods/s |  275.38 |  275.67 | +0.11%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc99   | pods/s |  275.38 |  277.22 | +0.67%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1237.40 |  865.39 | -30.06% |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  786.11 |  386.74 | -50.80% |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 2956.64 | 2510.42 | -15.09% |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 6202.31 | 5314.16 | -14.32% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 1348.09 | 1922.55 | +42.61% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  981.18 | 1747.25 | +78.08% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 3599.99 | 3840.48 | +6.68%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 6907.92 | 6716.56 | -2.77%  |
+--------------------------+---------------------------+----------+--------+---------+---------+---------+

Does this PR introduce a user-facing change?

kube-scheduler now logs node and plugin scores even when --v<10:
- scores of the top 3 plugins on the top 3 nodes are dumped if --v=4,5
- scores of all plugins on the top 6 nodes are dumped if --v=6,7,8,9

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note-none Denotes a PR that doesn't merit a release note. kind/feature Categorizes issue or PR as related to a new feature. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 6, 2021
@k8s-ci-robot
Contributor

@muma378: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jul 6, 2021
@k8s-ci-robot
Contributor

Hi @muma378. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 6, 2021
@k8s-ci-robot k8s-ci-robot requested review from ahg-g and damemi July 6, 2021 12:55
@wzshiming
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 7, 2021
@muma378
Contributor Author

muma378 commented Jul 7, 2021

/retest

@damemi
Contributor

damemi commented Jul 7, 2021

@muma378 is this ready for review? If so please move it from "draft" to ready so we know it's not still a WIP :)

@muma378
Contributor Author

muma378 commented Jul 9, 2021

@muma378 is this ready for review? If so please move it from "draft" to ready so we know it's not still a WIP :)

@damemi thanks very much for the reminder, but I'm still struggling to run the e2e tests on my local laptop to check whether the logs are output correctly.
Do you have any ideas for getting the kube-scheduler logs without building a new cluster? Any suggestions would be appreciated.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. labels Jul 28, 2021
@muma378 muma378 marked this pull request as ready for review July 28, 2021 07:15
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 28, 2021
@muma378 muma378 requested a review from yuzhiquan July 28, 2021 07:17
@muma378
Contributor Author

muma378 commented Sep 1, 2021

@damemi @ahg-g PTAL

Contributor

@damemi damemi left a comment

Sorry for the delay on this, @muma378. This looks pretty good to me. Do I understand the performance results correctly that this improved performance at all log levels? That's really interesting; do you think it's due to the use of the heap?

I think this is good to go, but will let @Huang-Wei do final lgtm since he caught the issue last time
/approve
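
For reference, tracking only the top N entries with a fixed-size min-heap costs O(len(scores) * log N) and avoids sorting or logging every node, which may explain part of the improvement damemi asks about. A self-contained sketch with assumed types and names, not the PR's actual code:

package main

import (
    "container/heap"
    "fmt"
)

type pluginScore struct {
    Name  string
    Score int64
}

// minHeap keeps the smallest retained score at the root so it can be
// evicted whenever a larger score arrives.
type minHeap []pluginScore

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].Score < h[j].Score }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(pluginScore)) }
func (h *minHeap) Pop() interface{} {
    old := *h
    x := old[len(old)-1]
    *h = old[:len(old)-1]
    return x
}

// topN returns the n highest-scoring plugins in descending order,
// scanning the input once and keeping at most n entries in the heap.
func topN(scores []pluginScore, n int) []pluginScore {
    h := &minHeap{}
    for _, s := range scores {
        if h.Len() < n {
            heap.Push(h, s)
        } else if s.Score > (*h)[0].Score {
            (*h)[0] = s    // replace the current minimum
            heap.Fix(h, 0) // restore the heap invariant
        }
    }
    out := make([]pluginScore, h.Len())
    for i := len(out) - 1; i >= 0; i-- {
        out[i] = heap.Pop(h).(pluginScore) // pops ascending; fill back-to-front
    }
    return out
}

func main() {
    scores := []pluginScore{
        {Name: "NodePreferAvoidPods", Score: 1000000},
        {Name: "TaintToleration", Score: 100},
        {Name: "PodTopologySpread", Score: 200},
        {Name: "ImageLocality", Score: 0},
    }
    fmt.Println(topN(scores, 3))
    // [{NodePreferAvoidPods 1000000} {PodTopologySpread 200} {TaintToleration 100}]
}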

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: damemi, muma378

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 7, 2021
@wzshiming
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 8, 2021
@k8s-ci-robot k8s-ci-robot merged commit 3282d6c into kubernetes:master Sep 8, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.23 milestone Sep 8, 2021
@Huang-Wei
Member

/lgtm

Well, I think this led to the PR being merged without a thorough review. @wzshiming please hold off on the /lgtm directive if a PR needs further discussion.

After taking a glance, I can spot several issues:

  1. A consensus on the values of M and N. This has not been fully discussed yet.
  2. Further digging into the performance results. The tests ran at v=1 and v=4, but those are actually the same when running in the context of the integration framework:
    // Ensure we log at least level 4
    v := flag.Lookup("v").Value
    level, _ := strconv.Atoi(v.String())
    if level < 4 {
        v.Set("4")
    }

    I'd expect to see results comparing before/after at v=4 and v=6.
  3. Further review of the code, such as adding UTs and addressing some logic nits.

@wzshiming
Member

wzshiming commented Sep 8, 2021

@Huang-Wei
I am sorry, I didn't notice that the review was only /approve when I added /lgtm. The /lgtm only represents me personally.

@pacoxu
Member

pacoxu commented Sep 8, 2021

I suggest reopening the issue and following up on Wei's comments.

/cc @kerthcet @muma378

@k8s-ci-robot
Contributor

@pacoxu: GitHub didn't allow me to request PR reviews from the following users: muma378.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

I suggest reopening the issue and following up on Wei's comments.

/cc @kerthcet @muma378

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Huang-Wei
Member

@damemi @ahg-g we may have to revert it and open a new PR for a thorough review, WDYT?

@damemi
Contributor

damemi commented Sep 8, 2021

I didn't realize the performance tests didn't cover all the cases, sorry. We should try to at least get some measurements for that first. If there is still a big hit to performance, then we should revert.

@ahg-g
Member

ahg-g commented Sep 8, 2021

+1 to reverting. The logic seems a bit ad hoc to me and not necessarily placed in the right pkg.

If we are serious about dumping internal scheduler state or decisions, we need to design a library for that, which includes the logic we currently have for dumping the cache:

@Huang-Wei Huang-Wei mentioned this pull request Sep 8, 2021
@Huang-Wei
Member

@damemi I created #104849 to revert this PR.

@muma378 Apologies for the revert. It doesn't mean we don't accept the changes; it's just that we want to review them thoroughly. Feel free to open a new PR, taking the comments #103515 (comment) and #103515 (comment) into consideration.

@muma378
Contributor Author

muma378 commented Sep 9, 2021

@damemi I created #104849 to revert this PR.

@muma378 Apologies for the revert. It doesn't mean we don't accept the changes; it's just that we want to review them thoroughly. Feel free to open a new PR, taking the comments #103515 (comment) and #103515 (comment) into consideration.

@Huang-Wei Never mind, it is absolutely OK with me. I feel sorry about the mistake in the performance testing and will keep following the issue.

Regarding your and @ahg-g's suggestions:

  • I will do more performance tests with logging levels 5/6/9, and add UTs;
  • Is there anything I can do about choosing the values of M and N? (such as comparing the performance drop between different values?)
  • Add verbose logs for node/plugin scores even ranged in low levels #103515 (comment) sounds reasonable; however, it may be a big piece of work. Should I just open a new PR, or start it in a more formal way (such as writing a design proposal)?


Successfully merging this pull request may close these issues.

More debug logs to check scheduler spread algorithm