🔥 Responsible AI Weekly: Summit takeaways, what is AI Governance, and more!

Let's dive into the week's news and explore the latest developments in AI and technology. ⚡️

Was this email forwarded to you? Make sure to subscribe for more!

🧵In this issue...

  1. Operationalizing RAI: Understand AI Governance, the importance of its role in today's world, and what it means for you and your organization.

  2. Ecosystem: Read about Credo AI's participation in Meta's Open Loop program and the CEO's testimony before the U.S. House of Representatives Committee.

  3. 2022 Global Responsible AI Summit: Learn about the 5 main takeaways from our inaugural Summit. 🚀

⚙️ What is AI Governance?

AI has the power to benefit humanity, but it can also harm society if left unchecked. To ensure that AI's benefits outweigh its possible harm, we must consider ways to mitigate risks and maximize benefits.

The solution we propose? AI Governance.

Read the first blog post of our AI Governance series to learn more about the definition of AI Governance and its importance in today's world. (Spoiler alert: we use an airplane analogy to explain AI Governance. Have a look! 👀)

🌎 Credo AI's Participation in Meta's Open Loop

In early June, Credo AI was invited to participate in the EU AI Act Open Loop program, which is part of a broader experimental governance initiative supported by Meta (previously Facebook).

The program builds on the collaboration of regulators, governments, tech businesses, academics, and civil society to inform the AI governance debate with empirical and evidence-based policy recommendations.

We at Credo AI are proud to be a part of this initiative and to make a tangible impact on the current AI regulation landscape!

Learn about Open Loop's program and findings here.

🤖 Credo AI CEO testifies before the U.S. House of Representatives Committee on Science, Space and Technology Subcommittee on Research and Technology.

Left to right: Jordan Crenshaw (U.S. Chamber of Commerce), Navrina Singh (Credo AI), Ranking Member Feenstra, Chairwoman Stevens, Elham Tabassi (NIST), Dr. Charles Isbell (Georgia Institute of Technology)

On September 29th, Navrina Singh, Credo AI Founder & CEO, testified before the U.S. House of Representatives Committee on Science, Space and Technology Subcommittee on Research and Technology hearing entitled “Trustworthy AI: Managing the Risks of Artificial Intelligence” alongside Elham Tabassi (NIST), Jordan Crenshaw (U.S. Chamber of Commerce), and Dr. Charles Isbell (Georgia Institute of Technology).

Her testimony emphasized three key areas:

  1. Focus on the full AI lifecycle: From design to development to testing and validation, production, and use.

  2. Context is paramount: Achieving trustworthy AI depends on a shared understanding that governance and oversight of AI are industry-specific, application-specific, and data-specific to ensure that each system is fit for purpose.

  3. Transparency reporting and system assessments are critical for Responsible AI governance: Reporting requirements that promote and incentivize public disclosure of AI system behavior act as key drivers for the establishment of standards and benchmarks.

You can view the full session here and read her written testimony here.

🚀 5 Takeaways from our #CredoAISummit:

At the 2022 Global Responsible AI Summit, we had the pleasure of hosting seventeen experts from multidisciplinary fields who provided actionable insights on how enterprises can be more inclusive, transparent, and fair with their AI technology.

If you missed the Summit or want to relive some of our best moments, click here to check out our latest blog post with 5 key takeaways from the event. (You wouldn't want to miss it! 😉)

And that’s it for today's issue!

Thanks for reading, and we'll see you next week! 🌈