MIT Sloan Management Review
BIG IDEAS
In collaboration with BCG
RESEARCH REPORT
June 2023

Building Robust RAI Programs as Third-Party AI Tools Proliferate

by Elizabeth M. Renieris, David Kiron, and Steven Mills

AUTHORS

Elizabeth M. Renieris is guest editor for the Big Idea program, a senior research associate at Oxford's Institute for Ethics in AI, a senior fellow at the Centre for International Governance Innovation, and author of Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse (MIT Press, 2023).

David Kiron is an editorial director at MIT Sloan Management Review and coauthor of the book Workforce Ecosystems: Reaching Strategic Goals With People, Partners, and Technology (MIT Press, 2023).

Steven Mills is a managing director and partner at BCG, where he serves as the chief AI ethics officer.

CONTRIBUTORS

Jeanne Bickford, Todd Fitz, Kevin Foley, Andrea Gao, Carolyn Ann Geason-Beissel, Abhishek Gupta, Hari Kumar, Michele Lee DeFilippo, Tad Roselund, Allison Ryder, Sean Singer, and Peter Strutt

The research and analysis for this report was conducted under the direction of the authors as part of an MIT Sloan Management Review research initiative in collaboration with and sponsored by Boston Consulting Group.

To cite this report, please use: Elizabeth M. Renieris, David Kiron, and Steven Mills, "Building Robust RAI Programs as Third-Party AI Tools Proliferate," MIT Sloan Management Review and Boston Consulting Group, June 2023.

Copyright © Massachusetts Institute of Technology, 2023. All rights reserved.

REPRINT #65103

CONTENTS

Introduction
A Growing Gap Between RAI Leaders and Non-Leaders
Third-Party AI Risks on the Rise
Regulations Raise the Stakes
Now Is the Time to Double Down on RAI
Conclusion

The risks and failures of AI systems are more palpable and numerous than ever, but organizations are at risk of falling behind.

Introduction

In just a few short months since its release, OpenAI's ChatGPT tool has catapulted the capabilities, as well as the ethical challenges and failures, of artificial intelligence into the spotlight. Countless examples have emerged of the chatbot fabricating stories, including falsely accusing a law professor of sexual harassment and implicating an Australian mayor in a fake bribery scandal, leading to the first lawsuit against an AI chatbot for defamation. In April, Samsung made headlines when three of its employees accidentally leaked confidential company information, including internal meeting notes and source code, by inputting it into ChatGPT. That news prompted many companies, such as JPMorgan and Verizon, to block access to AI chatbots from corporate systems. In fact, nearly half of the companies polled in a recent Bloomberg survey reported that they are actively working on policies for employee chatbot use, suggesting that a significant share of businesses were caught off guard and were unprepared for these developments.

Indeed, the fast pace of AI advancements is making it harder to use AI responsibly and is putting pressure on responsible AI (RAI) programs to keep up. For example, companies' growing dependence on a burgeoning supply of third-party AI tools, along with the rapid adoption of generative AI (algorithms, such as ChatGPT, Dall-E 2, and Midjourney, that use training data to generate realistic or seemingly factual text, images, or audio), is exposing them to new commercial, legal, and reputational risks that are difficult to track. In some cases, managers may lack any awareness about the use of such tools by employees or others in the organization, a phenomenon known as shadow AI. As Stanford Law CodeX fellow Riyanka Roy Choudhury puts it, "RAI frameworks were not written to deal with the sudden, unimaginable number of risks that generative AI tools are introducing."

Organizational RAI programs are struggling to keep pace with technical advancements in AI.