Not That Smart
AI should serve, not compete with, human intelligence
Technologists and science fiction authors alike foresee a future in which artificial intelligence exceeds that of humans, and robots and other autonomous smart machines do ever more of our thinking for us. Long-time AI researcher Ben Shneiderman decisively “rejects the idea that autonomous machines can exceed or replace any meaningful notion of human intelligence, creativity, and responsibility.” Shneiderman advances an empowering vision of human-centered AI, in which intelligent machines are always designed to advance human agency rather than replace it. He challenges scientists and engineers to recognize the early danger signs of autonomous AI, from biased algorithms to the Boeing 737 MAX crashes, and to adopt a more empirical and accountable approach to AI design: one that starts with the needs of the human user and seeks to strengthen the social relations and activities that make our lives most rewarding.
Shneiderman offers recommendations to guide human-centered AI design and policy. But how can this approach be adopted across the rapidly expanding domain of AI innovation? Leila Doty and Lauren Sarkesian point to federal technology procurement as a powerful tool for advancing standards that ensure AI supports, rather than undermines, human well-being. The Trump Administration began to lay out goals for responsible AI innovation; Doty and Sarkesian challenge the Biden Administration to translate these goals into procurement standards that incentivize broad adoption by industry and steer innovation toward human-centered AI.
Photo by Possessed Photography.