Week 17: What's Happening in AI
Big week: U.S. Executive Order on AI, UK AI Safety Summit, VC framework for AI investing, company AI safety policies, new funds, and more

Welcome to this week’s edition of What's Happening in AI, covering the latest in policy, responsible AI, and where AI is going for the Responsible Innovation Labs community.
Below: Founders on data privacy and generative AI reshaping security threats and opportunities. Company, investor, and civil society announcements. Plus, can AI models learn like humans?
First, this week’s big story
U.S. Executive Order on safe, secure, and trustworthy AI and The Bletchley Declaration at UK AI Safety Summit
What’s next:
Biden’s executive order directs government agencies to develop safety guidelines, and The Bletchley Declaration sets the stage for Korea and France to host the next summits
- Senator Schumer holds the next AI Insight Forum today (more here). Congressional hearings continue on potential legislation (e.g., the future of work).
- The UK hosts the AI Safety Summit this week, where the UK, U.S., EU, China, and 25 other countries agree on The Bletchley Declaration for shared understanding and ongoing cooperation (more here, here)
- Anthropic’s Jack Clark summarizes the UK’s pre-read report on the safety and security risks of generative AI over the next 18 months (more here)
- Government funds and initiatives: the U.S. designates 31 regional tech hubs to spur innovation, strengthen manufacturing, and create good-paying jobs, and launches an AI talent search; the UK commits £100M to life sciences and healthcare and a £118M job skills package
- The G7 introduces a voluntary AI code of conduct, and the U.N. outlines its AI advisory group
Consider:
Company, investor, and civil society announcements this week
- Safety policies of Amazon, Anthropic, DeepMind, Inflection, Meta, Microsoft, and OpenAI shared ahead of the UK AI Safety Summit, with additional companies expected to attend (e.g., Adept, Palantir, Salesforce, Hugging Face, Cohere, Stability AI, Google, X, and Mistral)
- Radical Ventures, a signatory of the Responsible Innovation Labs RAI Commitment, on underwriting Responsible AI: venture capital needs a framework for AI investing (more here)
- Frontier Model Forum announces Executive Director and $10M+ AI Safety Fund
- Comments: “ethics and innovation can live in the same house”, “potential catalyst for responsible private sector innovation”, “sharing data with the government – that one’s going to be tricky”
Why this matters for startups:
Biden’s AI order may have a wide impact on startups. At Responsible Innovation Labs, the focus is on a protocol specific to startups and investors that promotes innovation, growth, and competitiveness, developed in consultation with venture investors, industry experts, policymakers, researchers, and civil society.
Stay tuned for an Executive Order primer for founders later this week by Responsible Innovation Labs.
Also in policy:
Stability AI, TikTok, and the governments of the UK, U.S., Korea, Australia, Germany, and Italy sign an agreement to fight AI-generated CSAM. The UK Online Safety Act becomes law. The U.S. Copyright Office receives ~9,270 comments on AI.
Responsible AI
Product and alignment:
- We need to focus on the AI harms that already exist
- OpenAI launches a preparedness team and challenge focused on frontier risk
- Cruise suspends all driverless operations after California suspends its permits
Security and privacy:
- Material Security Co-Founder Ryan Noon on AI threats and opportunities in cyber security
- Google adds generative AI to bug bounty program in commitment to safe and secure AI
- Palantir CEO Alex Karp on data privacy, saying the sale of NHS data is up to the government
Information:
- ‘Data poisoning’ anti-AI theft tools emerge — but are they ethical? (more here, here)
- Generative AI is playing a surprising role in Israel-Hamas disinformation
- Meta’s AI research head wants open source licensing to change
Resources:
- Data Provenance Explorer (MIT, Cohere, and 10+)
- AI Risk Framework (AI Vulnerability Database, AVID)
- A critical field guide for working with ML datasets (Knowing Machines)
- AI red-teaming is not a one-stop solution to AI harms (Data & Society)
- AI Safety Working Group, more here (MLCommons)
Where AI is going
Humanity:
- Managing AI risks in an era of rapid progress by Bengio, Hinton, and 22 colleagues
- The future of warfare: A $400 drone killing a $2M tank
- ICYMI: AI will first come for women
Capabilities:
- Like humans, can AI grasp related concepts after learning only one? (more here)
- Using simulation to train AI robots to work with humans (more here)
Stack and ecosystem:
- Training LLMs at scale with AMD MI250 GPUs
- LAION, a group behind Stable Diffusion, wants to open-source emotion-detecting AI
- NVIDIA adds support for TensorRT-LLM
- ICYMI: The half-life of the AI stack
Industry and climate:
- Controlling the ‘foundational data asset’: AI’s impact on Industry 4.0 and industrial decarbonization
- Generative AI’s energy problem today is foundational
Releases and raises:
- Releases: Grammarly’s personalized voice feature, DataGPT’s AI Analyst, Bard’s real-time response, ChatGPT Plus upload feature, Nolano’s English-Hindi NOLIN LLM, Together’s RedPajama-Data-v2
- Raises: Anthropic ($2B from Google)
Additional resources
- Can I remove my personal data from GenAI training datasets?
- ICYMI: Multi-modal prompt injection image attacks against GPT-4V
Sign up for weekly updates in your inbox here. What's Happening in AI is shaped by our Charter themes and Responsible AI work.
View the article here.