
Active ranking #394

Open · wants to merge 10 commits into master

Conversation

YoelShoshan (Collaborator)

Helper utilities to rank items based on either:

  1. A model that, given items a and b, predicts whether a should be ranked lower than b
  2. A model that, given a list of items (e.g. 8 items), predicts their ranking

The user is expected to supply such a callable, and the helper functions added in this PR call it iteratively to predict the overall ranking of the entire provided set.
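
To make the pairwise interface concrete, here is a minimal sketch of a comparison callable matching the `Callable[[Any, Any], bool]` signature used below, together with a plain-Python reference of the behavior the helpers are expected to reproduce by querying it iteratively. The names `make_score_comparator` and `rank_with_comparator` are illustrative only and are not part of this PR:

```python
import functools
from typing import Any, Callable, Dict, List


def make_score_comparator(scores: Dict[Any, float]) -> Callable[[Any, Any], bool]:
    """Build a callable that, given items a and b, returns True if a should be
    ranked lower than b (decided here by a precomputed score instead of a model)."""

    def compare(a: Any, b: Any) -> bool:
        return scores[a] < scores[b]

    return compare


def rank_with_comparator(items: List[Any], compare_pairwise_fn: Callable[[Any, Any], bool]) -> List[Any]:
    # Reference behavior only: sort the items using the boolean predicate,
    # adapted to the -1/+1 convention expected by cmp_to_key.
    def _cmp(a: Any, b: Any) -> int:
        return -1 if compare_pairwise_fn(a, b) else 1

    return sorted(items, key=functools.cmp_to_key(_cmp))


scores = {"x": 0.2, "y": 0.9, "z": 0.5}
print(rank_with_comparator(list(scores), make_score_comparator(scores)))
# ['x', 'z', 'y']  (lowest-ranked first)
```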

Comment on lines +1 to +4
import numpy as np
from typing import Callable, List, Any
from dataclasses import dataclass
from collections import defaultdict
Collaborator

Note the isort error.

Maybe you need to reinstall the pre-commit hooks? Something like `pre-commit install` with the repo as the current directory. If I'm right, you'll need to re-run the pre-commit hooks afterwards to make sure it applies the changes :)

Collaborator

Besides that, Jenkins seems to pass (after a rebuild).

@mosheraboh (Collaborator) left a comment

Looks good.
General comment: do you use this code to evaluate your model?
I wonder whether it should indeed be under eval/metrics?
If so, I suggest moving it to eval/metrics/lib, which is used for lower-level metrics functionality.

raise ValueError(f"Unknown method: {method}")


if __name__ == "__main__":
Collaborator

Consider converting it to a unittest.
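
For reference, a minimal sketch of what such a conversion could look like; `rank_items` is a placeholder for whatever helper this module actually exposes, so the commented assertion would need to be adapted to the real API:

```python
import unittest


class ActiveRankingTestCase(unittest.TestCase):
    def test_known_order_is_recovered(self) -> None:
        items = [3, 1, 2]

        def compare(a: int, b: int) -> bool:
            # True if a should be ranked lower than b.
            return a < b

        # Replace `rank_items` with the helper exported by this module:
        # ranking = rank_items(items, compare)
        # self.assertEqual(ranking, [1, 2, 3])
        self.assertEqual(sorted(items), [1, 2, 3])  # placeholder assertion


if __name__ == "__main__":
    unittest.main()
```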

return global_ranking


if __name__ == "__main__":
Collaborator

Consider converting it to a unittest.

def __init__(
self,
items: List[Any],
compare_pairwise_fn: Callable[[Any, Any], bool],
Collaborator

Can you share some details about the expected arguments of this function?
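
For discussion, based on the PR description the contract seems to be the one sketched below; it would be good to document it (including tie behavior) in the docstring. This is only my reading, not the actual implementation:

```python
from typing import Any


def compare_pairwise_fn(a: Any, b: Any) -> bool:
    """Expected contract (as I understand it from the description):
    return True if item `a` should be ranked lower than item `b`, otherwise False.
    A real implementation would run model inference here; this example
    just compares numeric items."""
    return float(a) < float(b)
```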
