Toward Practices for Human-Centered Machine Learning

2023-03-10

Chancellor, S. (2023). Toward Practices for Human-Centered Machine Learning. Retrieved March 16, 2023, from https://cacm.acm.org/magazines/2023/3/270209-toward-practices-for-human-centered-machine-learning/fulltext

“On the research side, new research points to worrisome trends of chasing metrics over more principled approaches and questionable gains in deep learning’s performance compared to linear models.” (Chancellor, p. 78)

“In response, a new area called “human-centered machine learning” (HCML) promises to balance technological possibilities with human needs and values.” (Chancellor, p. 78)

“Put simply, HCML couples technical innovations in ML with social values like fairness, equality, and justice. The focus for HCML is broad; it includes fair and transparent algorithm design,39 human-in-the-loop decision-making,48 design for human-AI collaborations,4 and exploring the social impacts of ML.” (Chancellor, p. 78)

“However, there are no unifying guidelines on what “human-centered” means, nor how HCML research and practice should be conducted” (Chancellor, p. 78)

“This article proposes practices for human-centered machine learning, as social values are as key to ML’s success as its technical innovations.” (Chancellor, p. 78)

“The most popular of these values is “fairness” and the closely related concept of “justice.” Other scholars focus on methodological plurality, often drawing from qualitative methods (such as interviews and user observation) in HCI, the social sciences, and the humanities.” (Chancellor, p. 80)

“But to me, these definitions always fell flat. In focusing on procedures or values, these definitions missed a key part of the whole concept—what it meant to be “human-centered.”” (Chancellor, p. 80)

“By adapting these lessons learned and discussing computing’s responsibility for society through a sociotechnical lens,43 this article provides a beginning set of HCML practices to help scholars conduct research that is rigorous without losing sight of values and pro-social goals.” (Chancellor, p. 80)

“Avoiding technological solutionism is more nuanced than just weighing the pros and cons of a potential ML solution. This practice asks us to evaluate whether ML is a good solution and if we have correctly articulated the problem that ML says it solves in the first place.” (Chancellor, p. 81)

“Human-centered machine learning is a set of practices for building, evaluating, deploying, and critiquing ML systems that balances technical innovation with an equivalent focus on human and social concerns.” (Chancellor, p. 81)

“HCML is a set of practices that guide better decision-making throughout the whole ML pipeline of problem brainstorming, development, and deployment.” (Chancellor, p. 81)

“None of this is grounded in immutable truth about the world. Criminality is determined by the legal system under which someone operates, which differs across governments. There is enormous variance in who is arrested for potentially committing crimes, who may have a police photograph taken, and cultural as well as legal considerations of who is eventually found guilty of a crime.” (Chancellor, p. 82)

“This practice asks us to ask ourselves: Who are we as researchers, and what perspectives do we bring to problems that influence our choices?” (Chancellor, p. 82)

“ML engineers are not the experts of a target population, and by assuming we know better, we risk causing more harm than good. At a minimum, we should go ask our target populations what they want and need.” (Chancellor, p. 82)

“We have responsibilities to our systems and the sociotechnical impacts of their design and deployment—the process is not simply data-design-deploy-done.” (Chancellor, p. 82)

“Acknowledge ML problem statements take positions. At the point where ML has been chosen as a solution, the next step is formalizing the problem statement that HCML is going to solve.” (Chancellor, p. 82)

“I believe a problem statement for an HCML solution is a position statement that reflects peoples’ perspectives and assumptions.” (Chancellor, p. 82)

“To avoid these pitfalls and reveal the true diversity of interactions between machine learning and people, HCML must move beyond just an “interaction mode” of understanding users.5” (Chancellor, p. 83)

“We must consider multiple ways of engaging with technology—directed and interactive, as well as ambient and nonuse—as important questions to engage with for understanding ML systems and broader implications for HCML. Clarifying who is involved can help manage who should be involved in the development life cycle of a system.” (Chancellor, p. 83)

“For example, to embed the practice of moving beyond HCI notions of the user, research could start by using stakeholder analysis and needs assessment to guide value-sensitive algorithm design.48” (Chancellor, p. 84)

“In addition to the evaluation of metrics for failures, it is becoming obvious that models produce other kinds of failures. These failures are not readily diagnosed with mathematical techniques like evaluating performance. Instead, these failures are embodied within the social and political world that the technical model inhabits.” (Chancellor, p. 84)
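The point about failures that aggregate metrics miss can be made concrete: a classifier may score well on overall accuracy while its errors fall disproportionately on one group. A minimal sketch (hypothetical data and group labels, not from the article) that disaggregates false-positive rates by group:

```python
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Return overall accuracy and per-group false-positive rates.

    A model can look fine on aggregate accuracy while one group
    bears most of the false positives -- the kind of failure that
    a single performance number will not surface.
    """
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    negatives = defaultdict(int)   # count of true-negative instances per group
    false_pos = defaultdict(int)   # count of false positives per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 0:
            negatives[g] += 1
            if p == 1:
                false_pos[g] += 1
    fpr = {g: false_pos[g] / negatives[g] for g in negatives}
    return accuracy, fpr

# Hypothetical labels: two equal-sized groups; every false positive
# lands on group "B" even though overall accuracy looks acceptable.
y_true = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1]
groups = ["A"] * 6 + ["B"] * 6

acc, fpr = per_group_error_rates(y_true, y_pred, groups)
print(acc)   # about 0.83 overall -- looks acceptable in aggregate
print(fpr)   # {'A': 0.0, 'B': 0.5} -- group B absorbs every false positive
```

This only covers failures that are measurable once the right disaggregation is chosen; the quote's larger point is that some failures are social and political, and no metric dashboard alone will reveal them.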