Festival of Ideas Swansea University Presented by Ryan Carrier, Executive Director ForHumanity




Presentation Transcript


  1. Artificial Intelligence and Automation Governance Festival of Ideas Swansea University Presented by Ryan Carrier, Executive Director ForHumanity May 1, 2019

  2. So What’s the Problem? If you’re not concerned about AI Safety, you should be. Vastly more risky than North Korea. Bill Gates on the dangers of artificial intelligence: “I don’t understand why some people are not concerned.” Stephen Hawking: AI will be “either best or worst thing” for humanity.

  3. OK, those are Smart Guys, what are they worried about? Non-zero probability downside risks:
  • Elimination of 40%-80% of all jobs (Technological Unemployment)
  • Restriction or elimination of basic rights (e.g. Privacy)
  • Accelerating Income Inequality
  • Embedded Ethics, Bias (data and algorithmic) and Control Issues
  • Substantial shift in what it means to be human and the society we live in (Transhumanism)
  • Cyber attacks, bad agents and overall security
  • Legal Interactions with AI-based machines
  • Autonomous weaponry
  • Existential Risk – Terminator-style, that the machines will eliminate humans

  4. What is ForHumanity?
  • US non-profit entity
  • Launched in 2016
  • Mission statement: To examine and analyze the downside risks associated with the ubiquitous advance of AI & Automation. Further, to engage in risk mitigation, where possible, to ensure the optimal outcome… ForHumanity

  5. Governance of Whom?
  Nation-States
  • Exist for the betterment of their own people, sometimes at the expense of others
  • Modes for change: Elections; Changing laws/Lobbying; Civil Disobedience; Regime change
  • Often their AI development will be military/clandestine/deemed in the interest of National Security and not subject to public transparency or governance
  Corporations
  • Exist for the betterment of their shareholders, usually via profits
  • Have no other reason for existence (*this is key*)
  • Have been subject to governance via regulations and laws (often avoided) and…
  • Actual governance and third-party oversight, and that is the model we want to talk about

  6. It’s a Simple Equation, Really
  For Corporations: if we can make safe and responsible Artificial Intelligence & Automation profitable whilst making dangerous and irresponsible AI & Automation costly, then all of humanity wins.
  Suggested Solution… Independent Audit of AI Systems

  7. We’ve Done It Before
  Financial Accounting Standards Board (FASB)
  • Launched in 1973, when the accounting industry came together to set accounting standards
  • Industry-wide effort; created Generally Accepted Accounting Principles (GAAP)
  • SEC adopted GAAP accounting for all publicly listed companies in 1975
  International Financial Reporting Standards (IFRS)
  • Launched in 1973 as the International Accounting Standards Committee
  • Also an industry-wide effort, but NOT global
  • GAAP and IFRS standards are not fully aligned
  The world wasn’t truly global in 1973, but it is today. Both FASB and IFRS share the following characteristics:
  • Third-party oversight, accountability, governance
  • Adopted by regulators as law
  • Demanded by users of financial data

  8. Independent Audit of AI Systems
  • Industry-wide, non-profit entity
  • Coordinated with the Big 4 Audit/Assurance Firms (PwC, Deloitte, Ernst & Young and KPMG)
  • Attached to a consortium of leading universities
  • Partnered with industry associations and world-leading standards bodies
  • Team of full-time individuals dedicated to sourcing “best practices” from:
  • Academia
  • Industry
  • Lawmakers
  • Everyone and anyone who has a perspective and value to add
  • Targeting “best-practice” consensus, but allowing for:
  • Speed-to-market
  • Genuine dissent
  • Managing the delicate balance between global “best practices” and local mores
  • Facilitating a global, inclusive, transparent process to solicit binary (compliant/non-compliant) criteria across five areas:

  9. Core Audit Silos: Cybersecurity, Ethics, Bias, Privacy, Trust

  10. Creating Audit/Assurance Standards

  11. Process
  • Dedicated Staff
  • Full-time positions
  • Holding office hours for 2 hours daily, scheduled to coincide with global time zones
  • Core Audit Silo heads
  • Facilitate dialogue
  • Research new ideas/concepts/contributors
  • Suggest possible rules
  • Solicit specific feedback on proposed rules
  • Track and manage dissent, striving for broad consensus
  • Manage discussion for decorum
  • Present proposed rules to Board of Directors
  • Quarterly update process: the Board of Directors votes and new rules are released
  • The Board (max 21 members) consists of industry professionals and academics; it should be diversified with representation of protected classes (may not cover every single class)

  12. What Does a Good Audit Rule Look Like?

  13. Ethics
  Examples of possible ethics audit rules:
  • Does your company subscribe to and abide by IEEE’s Ethically Aligned Design (EAD)?
  • Have all of your designers been educated on EAD?
  • Where you have made ethical choices, are you disclosing them?
  • Create guidelines for disclosure…

  14. Bias
  Examples of possible bias audit rules:
  • All training data with variables that are considered “protected” classes must conform to acceptable population norms
  • All training data must be peer reviewed for compliance with acceptable norms
  • All algorithmic results impacting “protected” classes must conform to acceptable population norms
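To make the first bias rule concrete, here is a minimal Python sketch of what an automated conformance check might look like: does the share of each “protected” class value in a training set fall within a tolerance of an acceptable population norm? The function name, the attribute, the norms and the 5% tolerance are all illustrative assumptions, not part of any proposed audit standard.

```python
# Hypothetical bias-audit check: compare the distribution of a protected
# attribute in training data against an acceptable population norm.
from collections import Counter

def conforms_to_norms(records, attribute, population_norms, tolerance=0.05):
    """True if each class value's share in `records` is within
    `tolerance` of its share in `population_norms` (both assumptions)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return all(
        abs(counts.get(value, 0) / total - expected) <= tolerance
        for value, expected in population_norms.items()
    )

# A balanced sample checked against a 50/50 norm passes.
data = [{"gender": "F"}, {"gender": "M"}, {"gender": "F"}, {"gender": "M"}]
print(conforms_to_norms(data, "gender", {"F": 0.5, "M": 0.5}))  # True
```

The same check applied to algorithmic outputs, rather than training records, would cover the third rule; a real auditor would of course need agreed norms and tolerances per jurisdiction.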

  15. Privacy
  Examples of possible privacy rules:
  • Are you GDPR compliant? If so, demonstrate…
  • Are consent forms in plain English?
  • Can you provide users with their data?
  • Do you have a breach-notification protocol?
  • Do you anonymize records?
  • Do you store passwords in plain text?
  • Do you maintain end-to-end password encryption?
  • Do you have a policy to limit access to users’ personal data?
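The plain-text-password rule has a well-known remedy an auditor would look for: storing only a salted hash, never the password itself. A minimal standard-library sketch, with illustrative parameters (salt size, iteration count) rather than mandated values:

```python
# Sketch of compliant password storage: keep (salt, digest), not the
# password. Uses PBKDF2 from Python's standard library.
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest) to store in place of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

An audit criterion phrased this way is attractively binary: either the stored credential database contains recoverable passwords or it does not.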

  16. Trust
  Examples of possible trust rules:
  • Does the AI run infinitely, or does a user launch it? Does a user close it, or does it end when a task is completed?
  • Transparency/Explainability – can the model explain why it reached a conclusion?
  • Does your automated welding arm interact physically with a human being? Is it mobile?
  • Can your AI or automated machine be manually separated from a power source?

  17. Cybersecurity
  Examples of possible cybersecurity rules:
  • Do you maintain a user log for each AI?
  • Do you maintain a “change log” for each AI?
  • Should code be air-gapped for change?
  • Do you maintain a firewall?
  • Do you have an education program for your cybersecurity policies?
  • Do you maintain remote, air-gapped backups?
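The “change log” rule amounts to keeping an append-only record of who changed which AI system, when, and why, so an auditor can reconstruct its history. A hypothetical sketch; the class and field names are assumptions for illustration, not a prescribed format:

```python
# Hypothetical per-AI change log an auditor might inspect: an
# append-only list of timestamped change entries.
from datetime import datetime, timezone

class ChangeLog:
    def __init__(self):
        self._entries = []  # append-only; entries are never edited

    def record(self, ai_id, author, description):
        """Append one timestamped change entry for the given AI system."""
        self._entries.append({
            "ai_id": ai_id,
            "author": author,
            "description": description,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def entries_for(self, ai_id):
        """Return every recorded change for one AI system, oldest first."""
        return [e for e in self._entries if e["ai_id"] == ai_id]

log = ChangeLog()
log.record("welding-arm-v2", "j.smith", "Updated torch-path model")
print(len(log.entries_for("welding-arm-v2")))  # 1
```

In practice such a log would live in tamper-evident storage (and pair with the user log the slide mentions), but even this shape makes the audit question binary: entries exist for every deployed change, or they do not.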

  18. Let’s Not Recreate the Wheel
  • EU GDPR.ORG
  • International Organization for Standardization (ISO)
  Where organizations are leading the way, leverage their expertise and work consultatively to adapt audit rules…

  19. Compliance/Licensing
  • Passing the audit is good for 12 months
  • Annotated as, e.g., “SAFEAI 2019”
  • So even as the rules change quarterly, companies have time to catch up as new “best practices” are implemented
  • Compliant companies may choose to license the SAFEAI brand
  • Rocker tags include Auditor/Services/Corporate/Product

  20. Merits of Independent Audit of AI Systems
  • Transparency
  • Fairer markets
  • Increased corporate trust
  • Opacity (yes, the opposite of transparency)
  • 3rd-party verification
  • International versus nation-state/law based
  • On-going and dynamic process
  • Transparent process
  • Can be tailored to fit local jurisdictions
  • Process maintained annually
  • Mutually goal-aligned with humanity (unlike credit ratings historically)
