2 changes: 1 addition & 1 deletion docs/guide/guidance.md
@@ -21,7 +21,7 @@ sociolinguists, and cultural anthropologists, as well as with members of the
populations on which technology will be deployed.

A single model, for example, the toxicity model that we leverage in the
-[example colab](https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Example_Colab),
+[example colab](../../tutorials/Fairness_Indicators_Example_Colab),
can be used in many different contexts. A toxicity model deployed on a website
to filter offensive comments, for example, is a very different use case than the
model being deployed in an example web UI where users can type in a sentence and
6 changes: 3 additions & 3 deletions docs/index.md
@@ -53,15 +53,15 @@ options {

<div class="grid cards" markdown>

-- ![asdf](https://www.tensorflow.org/static/responsible_ai/fairness_indicators/images/mlpracticum_480.png)
+- ![ML Practicum: Fairness in Perspective API using Fairness Indicators](https://www.tensorflow.org/static/responsible_ai/fairness_indicators/images/mlpracticum_480.png)

### [ML Practicum: Fairness in Perspective API using Fairness Indicators](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)

---

[Try the Case Study](https://developers.google.com/machine-learning/practica/fairness-indicators?utm_source=github&utm_medium=github&utm_campaign=fi-practicum&utm_term=&utm_content=repo-body)

-- ![Fairness Indicators on the TensorFlow blog](../images/tf_full_color_primary_icon.svg)
+- ![Fairness Indicators on the TensorFlow blog](images/tf_full_color_primary_icon.svg)

### [Fairness Indicators on the TensorFlow blog](https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html)

@@ -83,7 +83,7 @@

[Read on Google AI blog](https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html)

-- ![type:video](https://www.youtube.com/watch?v=6CwzDoE8J4M)
+- <iframe width="560" height="315" src="https://www.youtube.com/embed/6CwzDoE8J4M?si=gIL2KHdj96_SxdVH" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

### [Fairness Indicators at Google I/O](https://www.youtube.com/watch?v=6CwzDoE8J4M)

2 changes: 1 addition & 1 deletion docs/tutorials/Fairness_Indicators_Example_Colab.ipynb
@@ -605,7 +605,7 @@
"source": [
"With this particular dataset and task, systematically higher false positive and false negative rates for certain identities can lead to negative consequences. For example, in a content moderation system, a higher-than-overall false positive rate for a certain group can lead to those voices being silenced. Thus, it is important to regularly evaluate these types of criteria as you develop and improve models, and utilize tools such as Fairness Indicators, TFDV, and WIT to help illuminate potential problems. Once you've identified fairness issues, you can experiment with new data sources, data balancing, or other techniques to improve performance on underperforming groups.\n",
"\n",
"See [here](https://tensorflow.org/responsible_ai/fairness_indicators/guide/guidance) for more information and guidance on how to use Fairness Indicators.\n"
"See [here](../../guide/guidance) for more information and guidance on how to use Fairness Indicators.\n"
]
},
{
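To make the evaluation step described in the cell above concrete, here is a minimal sketch of computing sliced Fairness Indicators metrics with TensorFlow Model Analysis. It is illustrative only and not part of this change; the column names `prediction`, `label`, and `race`, the toy data, and the single 0.5 threshold are assumptions.

```python
# Illustrative sketch: per-slice Fairness Indicators metrics with TFMA on a
# small in-memory DataFrame. Column names and the threshold are placeholders.
import pandas as pd
import tensorflow_model_analysis as tfma
from google.protobuf import text_format

df = pd.DataFrame({
    "prediction": [0.9, 0.2, 0.7, 0.4],   # model scores
    "label": [1, 0, 0, 1],                # ground-truth labels
    "race": ["A", "B", "A", "B"],         # feature to slice on
})

eval_config = text_format.Parse(
    """
    model_specs { prediction_key: "prediction" label_key: "label" }
    metrics_specs {
      metrics {
        class_name: "FairnessIndicators"
        config: '{"thresholds": [0.5]}'
      }
    }
    slicing_specs {}                        # overall metrics
    slicing_specs { feature_keys: "race" }  # per-group slices
    """,
    tfma.EvalConfig(),
)

# Computes false positive rate, false negative rate, etc. overall and per slice.
eval_result = tfma.analyze_raw_data(df, eval_config)

# In a notebook, the results can be explored with the interactive widget:
# tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
```

`tfma.analyze_raw_data` keeps the sketch self-contained on an in-memory DataFrame; the Pandas case study below applies the same `EvalConfig` pattern to a real dataset.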
2 changes: 1 addition & 1 deletion docs/tutorials/Fairness_Indicators_Pandas_Case_Study.ipynb
@@ -386,7 +386,7 @@
"## Conclusion\n",
"Within this case study we imported a dataset into a Pandas DataFrame that we then analyzed with Fairness Indicators. Understanding the results of your model and underlying data is an important step in ensuring your model doesn't reflect harmful bias. In the context of this case study we examined the the LSAC dataset and how predictions from this data could be impacted by a students race. The concept of “what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning.”\u003csup\u003e1\u003c/sup\u003e Fairness Indicator is a tool to help mitigate fairness concerns in your machine learning model.\n",
"\n",
"For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](https://www.tensorflow.org/responsible_ai/fairness_indicators/guide).\n",
"For more information on using Fairness Indicators and resources to learn more about fairness concerns see [here](../../).\n",
"\n",
"---\n",
"\n",
@@ -67,7 +67,7 @@
"id": "-DQoReGDeN16"
},
"source": [
"This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](https://www.tensorflow.org/responsible_ai/fairness_indicators/guide). The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\n",
"This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using [Fairness Indicators](../../). The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:\n",
"\n",
"\n",
"* Train a simple, *unconstrained* neural network model to detect a person's smile in images using [`tf.keras`](https://www.tensorflow.org/guide/keras) and the large-scale CelebFaces Attributes ([CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)) dataset.\n",
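As a rough sketch of the constrained-optimization recipe this notebook describes, the snippet below builds a TFCO rate-minimization problem on synthetic data: minimize the overall error rate subject to a cap on the false positive rate for one group. The batch size, model architecture, 5% cap, and random minibatches are all assumptions for illustration, not the notebook's CelebA training code.

```python
# Simplified TFCO sketch on synthetic data (assumed shapes and hyperparameters).
import numpy as np
import tensorflow as tf
import tensorflow_constrained_optimization as tfco

BATCH_SIZE, NUM_FEATURES, MAX_FPR = 128, 16, 0.05

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(1),  # unthresholded score
])

# Variables holding the current minibatch; TFCO reads them through callables.
features = tf.Variable(np.zeros((BATCH_SIZE, NUM_FEATURES), np.float32))
labels = tf.Variable(np.zeros((BATCH_SIZE,), np.float32))
groups = tf.Variable(np.zeros((BATCH_SIZE,), np.float32))

def predictions():
  return tf.squeeze(model(features), axis=-1)

# Objective: overall error rate. Constraint: false positive rate on the
# subset of examples where groups == 1 must stay at or below MAX_FPR.
context = tfco.rate_context(predictions, labels=lambda: labels)
problem = tfco.RateMinimizationProblem(
    tfco.error_rate(context),
    [tfco.false_positive_rate(context.subset(lambda: groups > 0.5)) <= MAX_FPR])

optimizer = tfco.ProxyLagrangianOptimizerV2(
    optimizer=tf.keras.optimizers.Adam(0.001),
    num_constraints=problem.num_constraints)

# TFCO adds its own trainable state (slack variables, Lagrange multipliers),
# so those variables are optimized alongside the model weights.
var_list = (model.trainable_weights + list(problem.trainable_variables) +
            optimizer.trainable_variables())

for _ in range(100):  # training steps over random minibatches
  features.assign(np.random.rand(BATCH_SIZE, NUM_FEATURES).astype(np.float32))
  labels.assign(np.random.randint(0, 2, BATCH_SIZE).astype(np.float32))
  groups.assign(np.random.randint(0, 2, BATCH_SIZE).astype(np.float32))
  optimizer.minimize(problem, var_list=var_list)
```

Passing `problem.trainable_variables` and `optimizer.trainable_variables()` in the variable list is what lets the Lagrange multipliers be updated jointly with the model weights.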
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -4,6 +4,7 @@ repo_url: https://github.com/tensorflow/fairness-indicators

theme:
  name: material
+  logo: images/tf_full_color_primary_icon.svg
  palette:
    # Palette toggle for automatic mode
    - media: "(prefers-color-scheme)"
1 change: 0 additions & 1 deletion requirements-docs.txt
@@ -1,4 +1,3 @@
mkdocs
mkdocs-material
mkdocs-jupyter
-mkdocs-video