
[REFERENCE] google - Fairness: Types of Bias #4

Open
ohahohah opened this issue Sep 1, 2021 · 0 comments

Please describe the reference you would like to add.

  1. Title / Author (organization)
    Fairness: Types of Bias | Machine Learning Crash Course / Google

  2. Original link
    https://developers.google.com/machine-learning/crash-course/fairness/types-of-bias

  3. Reference description (3 lines or fewer)
    Types of human bias that can affect machine learning models.
    Machine learning models are not inherently objective. Engineers train models by feeding them a data set of training examples, and human involvement in the provision and curation of this data can make a model's predictions susceptible to bias.
    When building models, it's important to be aware of common human biases that can manifest in your data, so you can take proactive steps to mitigate their effects.
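One proactive step the reference suggests is auditing your training data before training. As a minimal sketch (not from the linked course; the attribute and example data are hypothetical), a simple representation-bias check compares how groups are distributed in the dataset:

```python
# Minimal sketch: auditing a training set for representation bias
# by comparing each group's share of the examples.
from collections import Counter


def group_proportions(examples, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(ex[key] for ex in examples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}


# Hypothetical training examples; 'region' is an illustrative attribute.
training_examples = [
    {"region": "urban", "label": 1},
    {"region": "urban", "label": 0},
    {"region": "urban", "label": 1},
    {"region": "rural", "label": 0},
]

# A heavily skewed distribution (here 75% urban) hints at
# representation bias worth investigating before training.
print(group_proportions(training_examples, "region"))
```

This only surfaces skew in a single attribute; other bias types the course covers (e.g. reporting or selection bias) require looking at how the data was collected, not just its distribution.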
