candidate fix for #49
pbiecek committed Aug 21, 2022
1 parent 2bd5db2 commit 6498abf
Showing 3 changed files with 451 additions and 447 deletions.
70 changes: 35 additions & 35 deletions DESCRIPTION
@@ -1,35 +1,35 @@
-Package: fairmodels
-Type: Package
-Title: Flexible Tool for Bias Detection, Visualization, and Mitigation
-Version: 1.2.0
-Authors@R:
-    c(person("Jakub", "Wiśniewski", role = c("aut", "cre"),
-      email = "[email protected]"),
-      person("Przemysław", "Biecek", role = c("aut"),
-      comment = c(ORCID = "0000-0001-8423-1823")))
-Description: Measure fairness metrics in one place for many models. Check how big is model's bias towards different races, sex, nationalities etc. Use measures such as Statistical Parity, Equal odds to detect the discrimination against unprivileged groups. Visualize the bias using heatmap, radar plot, biplot, bar chart (and more!). There are various pre-processing and post-processing bias mitigation algorithms implemented. Package also supports calculating fairness metrics for regression models. Find more details in (Wiśniewski, Biecek (2021)) <arXiv:2104.00507>.
-License: GPL-3
-Encoding: UTF-8
-LazyData: true
-Depends: R (>= 3.5)
-Imports:
-    DALEX,
-    ggplot2,
-    scales,
-    stats,
-    patchwork,
-Suggests:
-    ranger,
-    gbm,
-    knitr,
-    rmarkdown,
-    covr,
-    testthat,
-    spelling,
-    ggdendro,
-    ggrepel,
-RoxygenNote: 7.1.1.9001
-VignetteBuilder: knitr
-URL: https://fairmodels.drwhy.ai/
-BugReports: https://github.com/ModelOriented/fairmodels/issues
-Language: en-US
+Package: fairmodels
+Type: Package
+Title: Flexible Tool for Bias Detection, Visualization, and Mitigation
+Version: 1.2.1
+Authors@R:
+    c(person("Jakub", "Wiśniewski", role = c("aut", "cre"),
+      email = "[email protected]"),
+      person("Przemysław", "Biecek", role = c("aut"),
+      comment = c(ORCID = "0000-0001-8423-1823")))
+Description: Measure fairness metrics in one place for many models. Check how big is model's bias towards different races, sex, nationalities etc. Use measures such as Statistical Parity, Equal odds to detect the discrimination against unprivileged groups. Visualize the bias using heatmap, radar plot, biplot, bar chart (and more!). There are various pre-processing and post-processing bias mitigation algorithms implemented. Package also supports calculating fairness metrics for regression models. Find more details in (Wiśniewski, Biecek (2021)) <arXiv:2104.00507>.
+License: GPL-3
+Encoding: UTF-8
+LazyData: true
+Depends: R (>= 3.5)
+Imports:
+    DALEX,
+    ggplot2,
+    scales,
+    stats,
+    patchwork,
+Suggests:
+    ranger,
+    gbm,
+    knitr,
+    rmarkdown,
+    covr,
+    testthat,
+    spelling,
+    ggdendro,
+    ggrepel,
+RoxygenNote: 7.1.1.9001
+VignetteBuilder: knitr
+URL: https://fairmodels.drwhy.ai/
+BugReports: https://github.com/ModelOriented/fairmodels/issues
+Language: en-US
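
For context on what the package does, here is a minimal usage sketch of the workflow the Description above refers to: computing fairness metrics on a DALEX explainer. The `german` data set and its `Risk` and `Sex` columns follow the package's own examples; treat the exact column names and factor levels as assumptions, not guarantees.

    # minimal sketch, not part of this commit
    library(DALEX)       # listed in Imports above
    library(fairmodels)

    data("german")                      # credit-scoring data shipped with the package (assumed)
    y <- as.numeric(german$Risk) - 1    # encode target as 0/1
    model <- glm(Risk ~ ., data = german, family = binomial())

    explainer <- explain(model, data = german[, -1], y = y)
    fobject <- fairness_check(explainer,
                              protected  = german$Sex,  # protected attribute
                              privileged = "male")      # privileged level (assumed)
    print(fobject)
    plot(fobject)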
4 changes: 4 additions & 0 deletions NEWS.md
@@ -1,3 +1,7 @@
+# fairmodels 1.2.1
+* changed CITATION file (added reference to the RJournal)
+* fix for https://github.com/ModelOriented/fairmodels/issues/49
+
 # fairmodels 1.2.0
 * Added filtering metrics when plotting and printing of `fairness_object`.
 * Added ability to add custom measure function to print method of `fairness_object`
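
The 1.2.0 notes above mention filtering metrics when printing and plotting a `fairness_object`. A hedged sketch of what such a call might look like; the `fairness_metrics` argument name and the metric abbreviations are assumptions based on the NEWS wording, not verified against this commit:

    # hypothetical: argument name assumed from the NEWS entry
    print(fobject, fairness_metrics = c("ACC", "TPR", "STP"))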