6/11/2023

Dair timnit

Making space for "AI ethicists." Given the growing hostility of our anti-tech press, with its inevitable catalyzation of broad anti-tech popular sentiment, it's no surprise the field of artificial intelligence has already attracted an army of critics. Partly, as I covered at the top of the week, the most virulent front is led by Bay Area rationalists, chief among them Eliezer Yudkowsky, who last week suggested provocation of a nuclear war is preferable to further development in the field, and that any acceptable future must include the legal bombing of "rogue data centers." But while the blackest blackpill of the bunch, Eliezer has written more cogently about AI risk, and for longer, than almost any other thinker, living or dead. Recent hysteria aside, he's contributed a great deal to the field (if you can call it that) of AI safety. In keeping with the laws of Clown World, he is therefore unsurprisingly not the most influential anti-AI zealot in the crowded nascent cottage industry of anti-AI zealotry.

In terms of ability to shape public sentiment, our mainstream press is still king, and there is no "AI ethicist" the press loves more than former Googler Timnit Gebru, who believes AGI is a white supremacist fantasy. As Timnit has quickly come to dominate most media perspective on the subject of AI safety, it's unfortunately necessary to parse her work a little more closely. What follows is my summary of this important public figure's recent thinking, which I intend to grant the same respect that she has granted the men and women actually working on AI.

A brief list of her many public laurels: one of the World's 50 Greatest Leaders (Fortune), one of the ten people who shaped science in 2021 (Nature), and one of the most influential people in the world (TIME). Not too long ago, Timnit resigned from Google, an episode erroneously framed as a firing by every major press outlet that covered her resignation.

Our story begins with Timnit's strange reaction to the Future of Life Institute's open letter demanding a moratorium on AI training, which is ostensibly what both Timnit and Eliezer want. But Timnit - like Eliezer, who she endlessly attacks, and persistently racializes - was not happy. Her stated reason: the letter "fearmongered," an incredible claim given her recent implication that AGI is rooted in literally genocidal aspirations. On closer examination, Timnit's real issue with the letter seems mostly a matter of her enormous ego. These "white men" (her relentless, racist framing) were getting attention, and that attention belonged to Timnit.

One recent, typical example of her ire came after a blog post she didn't like referenced Sam Altman. But if Timnit's charge is somehow that none of the 'right people' are getting enough "fawning" press, with the category of 'right people' presumably including Timnit herself, it would be difficult to comprehend, as there is no single figure in AI who has received as much "fawning" press as Timnit.