AI & Data Privacy Trends: Cerebras IPO, Healthcare Data Sharing, Trump's AI Vetting Plan
Here are today's top AI & Tech news picks, curated with professional analysis.
OpenAI's cozy partner Cerebras is on track for a blockbuster IPO | TechCrunch
Expert Analysis
Cerebras Systems is gaining significant attention in the AI hardware market due to its innovative AI chip technology, particularly the Wafer-Scale Engine (WSE). The company has established a strategic partnership with OpenAI, which is believed to be a major contributor to its growth and market valuation.
This partnership suggests that Cerebras's high-performance AI computing capabilities play a crucial role in the development of cutting-edge generative AI models at companies like OpenAI. Cerebras is reportedly on track for a blockbuster initial public offering (IPO), with its market value expected to climb further amid accelerating investment in AI infrastructure.
- Key Takeaway: Cerebras Systems, a key AI hardware partner for OpenAI, is poised for a significant IPO, highlighting the growing investment in specialized AI infrastructure.
- Author: Julie Bort
Meta and TikTok Are Getting Your Data From State Healthcare Sites: Report
Expert Analysis
According to a Bloomberg report, all 20 state-run healthcare marketplaces in the U.S. include advertising trackers that share information with major tech companies such as Meta, TikTok, Snap, and Google. Approximately seven million Americans purchased health insurance through these state exchanges for 2026 coverage, and their personal information may have been shared.
The shared data can include sensitive biographical details such as ZIP codes, sex, citizenship status, and race. Furthermore, trackers on specific pages, such as Medicaid-related pages in Rhode Island or pages for pregnant noncitizens in Maryland, could reveal additional details about a person's financial status and need for assistance based solely on which pages they visit.
Meta states that it does not permit advertisers to share sensitive information and that its systems are designed to detect and filter out potentially sensitive data, yet the collection and sharing practices remain opaque. Following the report, several states have already removed some trackers from their exchange websites.
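To see why even "filtered" tracking can leak sensitive information, consider what a pixel-style tracker beacon typically transmits. The sketch below is illustrative only, assuming a generic tracker; the endpoint, parameter names, and fields are hypothetical, not Meta's or TikTok's actual implementations. The key point is that the page URL alone can be revealing, and hashing low-entropy fields like ZIP codes offers little protection.

```python
from hashlib import sha256
from urllib.parse import urlencode

def build_tracker_request(page_url: str, user_fields: dict) -> str:
    """Illustrative sketch of a pixel-style tracker beacon (hypothetical endpoint).

    Even when identity fields are hashed before sending, the page URL itself
    can be sensitive: a visit to a Medicaid eligibility page signals likely
    financial need regardless of what else is transmitted.
    """
    params = {"dl": page_url}  # "dl" (document location) mirrors common pixel conventions
    for key, value in user_fields.items():
        # Hashing is a common "privacy" measure, but hashes of low-entropy
        # fields (ZIP code, sex) are trivially reversible by dictionary lookup.
        params[key] = sha256(value.strip().lower().encode()).hexdigest()
    return "https://tracker.example/collect?" + urlencode(params)

url = build_tracker_request(
    "https://healthexchange.example/medicaid/eligibility",
    {"zip": "02903", "sex": "f"},
)
print(url)
```

Because there are only about 42,000 U.S. ZIP codes, an observer holding the hashed value can recover the original by hashing every candidate, which is why the report's privacy concerns persist even where data is nominally anonymized.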
- Key Takeaway: State-run healthcare websites are sharing sensitive user data, including demographic and potentially financial information, with major tech companies like Meta and TikTok via ad trackers, raising significant privacy concerns.
- Author: AJ Dellinger
Trump Reportedly Considering Executive Order Aimed at Vetting New AI Models
Expert Analysis
According to anonymous sources cited by the New York Times, President Donald Trump is reportedly considering an executive order aimed at vetting new AI models. This executive order would establish an “A.I. working group” composed of government and tech industry representatives to discuss oversight plans, including a formal government review process for new AI models.
This working group could determine which government agencies would be involved, such as the NSA, the White House Office of the National Cyber Director, and the Office of the Director of National Intelligence (led by Tulsi Gabbard). The initiative would mark a shift from the current arrangement: the mission of NIST's Center for A.I. Standards and Innovation (CAISI), an office established during the Biden administration to vet AI models, was reportedly changed after Trump took office.
Earlier policy documents from the Trump White House, such as "A National Policy Framework for Artificial Intelligence," advocated for light-touch regulation, a stance that contrasts sharply with the measures now under consideration. The proposed vetting plan has drawn comparisons to one being developed in Britain, where multiple government entities are seeking to vet AI models for safety, particularly after Anthropic's unreleased Claude Mythos Preview model was deemed too dangerous to release, especially with regard to cybersecurity.
- Key Takeaway: Trump is reportedly considering an executive order to establish a government-industry working group for vetting new AI models, signaling a potential shift towards more stringent AI regulation compared to his previous stance, influenced by international efforts and concerns over advanced AI safety.
- Author: Mike Pearl