🤖 AI Daily
2026-04-22

Source: arXiv · The Verge · TechCrunch
🔴 Security

Anthropic's Most Dangerous AI Model Leaked to Unauthorized Users

Anthropic's Mythos model — a cybersecurity tool so powerful the company warned it could be dangerous in the wrong hands — has been accessed by a group of unauthorized users via a third-party contractor and common internet sleuthing tools. The breach raises serious questions about access controls for frontier AI models with offensive security capabilities.

💰 Business

OpenAI Partners with Infosys to Deploy Codex Across Enterprise Clients

OpenAI has partnered with Indian IT giant Infosys to integrate Codex and other AI tools into Infosys' Topaz AI platform, targeting software engineering, legacy modernization, and DevOps workflows. The move gives OpenAI distribution into large enterprises across 60+ countries as it competes aggressively for the enterprise AI market. Infosys already has a similar deal with Anthropic.

💰 Business

SpaceX Strikes $60B Deal to Acquire AI Coding Platform Cursor

SpaceX has agreed to acquire AI coding startup Cursor for $60 billion, with a $10 billion break fee payable if the deal falls through. The move comes as Elon Musk prepares an IPO for the combined SpaceX/xAI/X entity, with Cursor seen as a strategic asset against Anthropic's Claude and OpenAI's Codex in the AI coding wars.

⚖️ Policy

AI Backlash Is Coming for Elections — And It's Already Starting

Communities across the US are mounting resistance to data center projects, anger at AI companies on social media is rising, in some cases to the point of condoning violence, and public concern about AI is intensifying ahead of elections. While campaigns focus on jobs and the economy, a political reckoning for AI companies is building and could reshape the regulatory landscape.

🔧 Product

Meta Tracks Employee Keystrokes & Mouse Movements to Train AI Agents

Meta is now recording US employees' mouse movements, clicks, keystrokes, and occasional screenshots via a tool called the Model Capability Initiative (MCI), deployed on work-related apps and websites. The data trains AI models to automate computer tasks, raising fresh privacy and workplace surveillance concerns even as Meta insists the recordings won't be used for performance reviews.

🧪 Research

AI Scientists Produce Results Without Reasoning Scientifically

A new arXiv paper finds that AI systems used in scientific discovery can produce valid-looking results while lacking genuine scientific reasoning processes. The work highlights a fundamental gap between outcome quality and methodology fidelity — a critical concern as AI agents become more embedded in research pipelines.

🧪 Research

ARES: Adaptive Red-Teaming and End-to-End Repair of Policy-Reward Systems

ARES introduces adaptive red-teaming techniques to systematically identify and repair vulnerabilities in AI policy-reward systems, offering an end-to-end framework for hardening agents against reward hacking and specification gaming in complex environments.

🧪 Research

Human-Guided Harm Recovery for Computer Use Agents

New research proposes a framework where human oversight enables AI agents to recover gracefully from harmful actions during computer use tasks, providing a safety layer for autonomous agents operating in real-world software environments.