
FAT ML

Original article

Backup

Fairness, Accountability, and Transparency in Machine Learning

Bringing together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning.

The past few years have seen growing recognition that machine learning raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making. In particular, policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions.

At the same time, there is increasing alarm that the complexity of machine learning may reduce the justification for consequential decisions to "the algorithm made me do it."

The annual event provides researchers with a venue to explore how to characterize and address these issues with computationally rigorous methods.
