Keeping Up with AI: Updates to NYC Local Law No. 144, Responsible AI Playbooks, News, and more!

Let's dive into the week's news and explore the latest developments in AI and technology.

Was this email forwarded to you? Make sure to subscribe for more!

🧵In this issue...

  • AI Bias Audit: Learn about the final changes to NYC Local Law No. 144 and ensure compliance by July 5, 2023.

  • Maximize Your Impact: Discover your Responsible AI Persona and lead the way in advancing RAI in your organization!

  • Listen today: Uncover the 'AI-first, ethics-forward' approach to artificial intelligence and learn when governance went from a lonely conversation to a global imperative.

  • Keeping up with AI News: Stay informed on the latest developments in AI with five critical headlines!

📗 Resources:

🚨 NYC Releases Final Rules for Automated Employment Decision Systems! (Effective July 5, 2023)

Last Thursday, the New York City Department of Consumer and Worker Protection (DCWP) announced the final rules for NYC Local Law No. 144, which includes a new enforcement date of July 5, 2023 (previously April 15, 2023) and added requirements to the AI Bias Audit.

With only ninety days to comply with the updated regulation, ensuring your organization is fully prepared is critical!

👉 Learn everything you need to know about the final changes in our latest blog post, and take control of your compliance journey by requesting a demo with Credo AI today. (Don't wait until it's too late!)

🔎 Maximize Your Impact: Discover your role in Responsible AI!

Responsible AI involves four key roles:

🔎 The Influencer: Responsible for offering insights to shape AI development and deployment.

✅ The Enabler: Responsible for providing organizations with the necessary tools and resources to maximize the benefits of AI.

👩‍💻 The Builder: Responsible for developing and implementing effective AI systems.

⛑️ The Protector: Responsible for minimizing AI risks.

Each persona has specific responsibilities that are crucial to making Responsible AI change management successful.

To discover your very own Responsible AI Persona and maximize your impact on change, take our Responsible AI Persona Quiz today! (Only three questions!)

🎙️Podcasts: Exploring the State of the AI Industry with Credo AI Founder and CEO, Navrina Singh

Get exclusive insights into the current state of the AI industry from Navrina Singh, Founder and CEO of Credo AI, who recently appeared as a guest on two leading technology podcasts.

Don't miss her discussions on responsible AI governance, ethics, and compliance on Founded and Funded, as well as her conversation with TechCrunch's Equity podcast, where she shares the importance of regulation for responsible AI development.

Tune in now!🥸 

📰 AI News:

  • Canada's federal privacy watchdog probing OpenAI, ChatGPT following complaint. The watchdog's office announced Tuesday that it is initiating the investigation into the U.S.-based company OpenAI because it received a complaint alleging "the collection, use and disclosure of personal information without consent." CBC.

  • Italy became the first Western country to ban ChatGPT. Here’s what other countries are doing. The move has highlighted an absence of any concrete regulations, with the European Union and China among the few jurisdictions developing tailored rules for AI. Various governments are exploring how to regulate AI. CNBC.

  • AI Doesn’t Hallucinate. It Makes Things Up. Humans have a tendency to anthropomorphize machines, but while ChatGPT can produce convincing-sounding text, it doesn’t actually understand what it’s saying. The term “hallucinate” obscures what’s really going on. Bloomberg.

  • Ethicists fire back at ‘AI Pause’ letter they say ‘ignores the actual harms’. A group of researchers has written a counterpoint to this week’s controversial letter asking for a six-month “pause” on AI development, criticizing its focus on hypothetical future threats when real harms are attributable to misuse of the tech today. TechCrunch.

  • Red Teaming Improved GPT-4. Violet Teaming Goes Even Further. Reducing harmful outputs isn't enough. AI companies must also invest in tools that can defend our institutions against the risks of their systems. Wired.

And that’s it for today's issue!

Thank you for reading, and we'll see you next week! 🌈