Welcome to the DX Today Executive Briefing
The AI ecosystem continues to move at a pace that demands executive attention on multiple fronts simultaneously. This week, the intersection of workforce transformation, protocol standardization, regulatory action, and enterprise pricing innovation paints a picture of an industry that is maturing rapidly — but not without significant disruption along the way. From boardrooms grappling with the human cost of AI investment to conference halls in New York where the plumbing of the agentic future is being laid, the decisions being made right now will define how organizations operate for the next decade.
In this edition, we examine Oracle's historic 30,000-person workforce reduction to fund its AI ambitions, the convergence of the agentic AI community at the MCP Dev Summit in New York City, the accelerating wave of state-level AI legislation sweeping the United States, and IFS's disruptive new pricing model that promises to remove one of the most persistent barriers to enterprise AI adoption. Each story represents a different dimension of the same fundamental shift: artificial intelligence is no longer an experiment — it is becoming the operating system of modern enterprise.
In this edition: Oracle's massive 30,000-person layoff to fund AI infrastructure, the MCP Dev Summit convening in NYC to shape agentic AI standards, a surge of state-level AI legislation led by California, Colorado, and Texas, and IFS's groundbreaking asset-based pricing model for enterprise AI.
Oracle Cuts 30,000 Jobs in Largest AI-Driven Workforce Restructuring to Date
Oracle has initiated a massive global layoff affecting up to 30,000 employees, roughly 18% of its 162,000-person workforce as of mid-2025, as part of a bold pivot to fund expansive AI infrastructure investments. According to reports from CIO.com, workers in the United States, India, Canada, Mexico, and Uruguay received abrupt termination emails from Oracle leadership around 6 a.m. local time on March 31, with immediate system lockouts and no prior notice from HR or managers, marking one of the largest workforce reductions in the company's history.
The cuts span multiple divisions, including heavy impacts on the Revenue and Health Sciences unit, the SaaS and Virtual Operations Services group, and NetSuite's India Development Centre, where project management, individual contributors, and managers across levels faced reductions of at least 30% in some areas. Employee posts on platforms like Blind and Reddit's r/employeesOfOracle convey the shock of the abrupt cuts. In India alone, reports from Moneylife.in and YouTube channels like AIM Network indicate 10,000 to 12,000 jobs already cut, with another round potentially forthcoming within a month.
Financially, Oracle disclosed a $2.1 billion restructuring plan in its March 2026 10-Q SEC filing, having already recorded $982 million in severance costs through the first nine months of fiscal 2026. Despite a robust 95% jump in quarterly net income to $6.13 billion and remaining performance obligations ballooning to $523 billion—up 433% year over year—the company is reallocating capital aggressively. Analysts at TD Cowen, as cited in CIO.com and Moneylife.in, project these 20,000 to 30,000 job cuts could unlock $8 billion to $10 billion in incremental free cash flow. This follows an earlier round of about 10,000 layoffs in late 2025 under a $1.6 billion plan.
The driver is Oracle's massive AI bet. The company raised approximately $50 billion in debt during 2026 to support an estimated $156 billion in infrastructure commitments, including global data center expansions for Oracle Cloud Infrastructure (OCI), generative AI, and hyperscale demand. TD Cowen noted in January that U.S. banks retreating from financing these projects prompted consideration of such drastic measures, alongside options like "bring your own chip" arrangements to shift hardware costs to customers or even selling assets like the Cerner health-care unit acquired for $28.3 billion in 2022. Oracle executives have signaled no further borrowing plans for 2026, framing the layoffs as a strategic reallocation from software-style growth to kilowatt-scale AI economics, per CIO.com analyst commentary.
This move echoes broader industry trends in early 2026, though Oracle's scale stands out. Atlassian reportedly cut 1,600 jobs (10% of its workforce) on March 11, with CEO Mike Cannon-Brookes acknowledging AI's role in reshaping skills, though the details remain unconfirmed. Aggregate claims of more than 50,000 tech layoffs in Q1 2026 are likewise difficult to corroborate, yet the pattern of AI-driven reductions aligns with broader tech sector pressures, and Oracle's action underscores a shift in which firms trade headcount for compute.
The human impact is profound, especially for H-1B visa holders facing a tight 60-day job search window amid a cooling hiring market, though specifics on Oracle's visa demographics remain anecdotal. Separation packages vary: U.S. employees submit personal emails to receive DocuSign documents and FAQs, while employees in India receive an N+2 severance formula without unvested stock payouts, per NEWS9 Live reports. LinkedIn trends showing a 340% surge in AI job postings against a 15% drop in traditional software roles highlight a reskilling chasm, though the exact 2026 figures remain unverified.
Critics, including CIO.com and UC Today analysts, warn of risks to enterprise continuity. Legacy database and ERP customers rely on specialized expertise that AI can't yet fully replace, potentially delaying product roadmaps and eroding support quality despite Oracle's assurances of intact release commitments for the next two quarters. Customers are probing for clarity, with vague responses seen as red flags.
Ultimately, Oracle's gamble signals the end of tentative AI adoption. As TD Cowen and others frame it, this is capital reallocation at warp speed, trading human expertise for infrastructure dominance amid debt pressures and investor scrutiny over cash flows. Execution will test whether the productivity promised by OCI and AI hosting offsets the exodus of experienced talent, reshaping not just Oracle but enterprise tech's workforce paradigm for years ahead.
CHROs, CIOs, and Enterprise Technology Leaders
Oracle's restructuring establishes a precedent that every enterprise technology leader must now factor into workforce planning. The reported 17% reskilling rate is a strategic liability: organizations that fail to invest in workforce transition programs alongside AI infrastructure will face talent gaps, institutional knowledge loss, and potential service degradation. Executive teams should immediately assess their own workforce-to-AI-investment ratios and develop concrete upskilling roadmaps before displacement outpaces preparation.
MCP Dev Summit Convenes in New York as Agentic AI Standards Reach Critical Mass
The Agentic AI Foundation opened the doors to the MCP Dev Summit North America in New York City on April 2-3, 2026, marking a pivotal gathering for builders, contributors, and organizations advancing AI development through the Model Context Protocol, or MCP. According to the Linux Foundation's event page, this summit unites MCP co-founders, maintainers, and developers to explore innovations, share best practices, and showcase next-generation AI agents built on this standardized framework for connecting large language models with applications and tools.
The event, held at the New York Marriott Marquis, features more than 95 sessions, as announced by the Agentic AI Foundation in its February 23, 2026 press release from San Francisco. These sessions draw from MCP co-founders, contributors, and production deployers, delving into scaling MCP, secure orchestration, observability, enterprise integration, and the role of open standards in fostering interoperability for agentic AI systems. Speakers from leading organizations such as Anthropic, Datadog, Hugging Face, and Microsoft will present real-world implementations, best practices, and technological advancements, emphasizing how shared infrastructure like MCP enables agents to connect, coordinate, and operate across diverse tools, models, and platforms.
MCP itself has emerged as a transformative protocol, providing developers with a blueprint for interfacing large language models with external applications and tools, as highlighted in the summit's promotional materials from the Linux Foundation and Sessionize call for proposals. Over the past year, the project has rapidly reshaped AI agent development by introducing critical standardization, moving the field from experimental setups to robust, production-ready infrastructure. The summit's immersive technical agenda reflects this maturation, with deep dives into practical topics designed to empower the community building scalable AI agents.
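To make the standardization concrete, the sketch below illustrates the pattern MCP formalizes: a server advertises a catalog of tools with typed schemas, and an agent discovers and invokes them through one uniform interface instead of bespoke integrations. This is a simplified illustration in plain Python, not the official MCP SDK; all class and tool names here are hypothetical.

```python
import json

# Hypothetical sketch of the tool-discovery/invocation pattern that MCP
# standardizes. Not the official SDK; names are illustrative only.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description, schema):
        """Register a function as a callable tool with a JSON-style schema."""
        def decorator(fn):
            self._tools[name] = {
                "fn": fn, "description": description, "inputSchema": schema,
            }
            return fn
        return decorator

    def list_tools(self):
        # Analogous to a tools/list request: the server advertises capabilities.
        return [
            {"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for n, t in self._tools.items()
        ]

    def call_tool(self, name, arguments):
        # Analogous to a tools/call request: uniform invocation by name.
        return self._tools[name]["fn"](**arguments)

server = ToolServer()

@server.tool("get_sensor_reading", "Read a named equipment sensor",
             {"type": "object",
              "properties": {"sensor_id": {"type": "string"}}})
def get_sensor_reading(sensor_id):
    readings = {"pump-7": 42.5}  # stand-in for a real data source
    return readings.get(sensor_id)

# An agent can discover available tools without hard-coding the integration:
print(json.dumps([t["name"] for t in server.list_tools()]))
print(server.call_tool("get_sensor_reading", {"sensor_id": "pump-7"}))
```

The value of the real protocol is that any MCP-aware agent can consume any MCP server through this same discover-then-call contract, which is what moves agent development from one-off wiring to shared infrastructure.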
Organized by the Agentic AI Foundation, as noted on Intellyx's event listing, the summit underscores the foundation's role in driving transparent, collaborative evolution of agentic AI technologies. Keynotes and Day 0 workshops were slated for announcement in late February, with registration opening alongside an early-bird deadline of February 25 and scholarship applications invited for broader accessibility. The program schedule, available via the Linux Foundation Events site, lists timings and room assignments subject to change.
Community excitement is palpable, with companies like MotherDuck featuring speakers such as Till Döhmen on "Reflections on Context Engineering via MCP Servers," and Prefect sending team members including Jeremiah and Adam to network with the MCP ecosystem. This in-person event, starting April 2 at 9:00 AM Eastern Time, gathers the protocol's core contributors and users in one venue to tackle real-world challenges in running agent systems at scale.
The summit arrives at a moment of accelerating momentum for agentic AI standards. MCP's focus on tool integration positions it as a cornerstone for secure, scalable systems, complementing broader efforts in open agent technologies. Sessions spotlight how interoperability via protocols like MCP prevents vendor lock-in, allowing agents to thrive in multi-platform environments. For engineers, architects, and platform teams, the event offers hands-on guidance grounded in production experience, from conformance testing to threat modeling.
As the AI landscape evolves, the MCP Dev Summit North America stands out for its emphasis on collaboration over competition. With no tolerance for sales pitches—enforced through a strict code of conduct—this community-driven forum prioritizes genuine knowledge exchange. Attendees can expect detailed explorations of MCP's architecture, its impact on context engineering, and strategies for enterprise adoption. The presence of industry leaders signals strong buy-in, positioning the summit as a launchpad for the next wave of AI agent innovations.
Beyond technical tracks, the event fosters connections among those shaping the future of AI. From workshops on observability to case studies on secure orchestration, every element equips participants to build resilient systems. As agentic AI moves into production at enterprise scale, gatherings like this one solidify the infrastructure needed for widespread deployment, ensuring developers have the tools, standards, and community support to succeed. The MCP Dev Summit thus not only celebrates current achievements but charts the course for interoperable, trustworthy AI agents in the years ahead.
CTOs, Enterprise Architects, and AI Platform Leaders
The MCP Dev Summit marks the transition of agentic AI from experimental to production-grade infrastructure. With 97 million monthly SDK downloads and backing from every major AI provider, MCP is no longer optional — it is becoming the standard interface layer for enterprise AI agent systems. Technology leaders should prioritize protocol literacy across their engineering teams, begin piloting MCP-based agent architectures in controlled environments, and engage with the Agentic AI Foundation to ensure their requirements are represented as these standards evolve.
State Legislatures Mount Unprecedented Wave of AI Safety Laws as Federal Vacuum Persists
The United States is experiencing a significant surge in state-level artificial intelligence legislation, with lawmakers in 45 states introducing 1,561 AI-related bills as of March 2026, surpassing the total from all of 2024 and reflecting heightened concerns over generative AI, algorithmic accountability, and deepfakes. This wave of activity underscores a growing patchwork of regulations amid federal inaction, as states like California, Colorado, and Texas enact laws effective in 2026 targeting high-risk AI systems, content transparency, and governance standards.
At the forefront of this momentum, California has emerged as a leader with over 20 new AI laws signed by Governor Gavin Newsom, taking effect on January 1, 2026, spanning sectors from employment and healthcare to data privacy and generative AI. Key measures include the Transparency in Frontier Artificial Intelligence Act, which imposes safety requirements on frontier AI models, and the Generative Artificial Intelligence Training Data Transparency Act, mandating developers to disclose high-level training data information for public-use systems. Complementing these, the California AI Transparency Act requires large AI platforms to offer free detection tools and embed watermarks in AI-generated content, with its effective date shifted to August 2, 2026 following delays. In healthcare, AB 489 prohibits AI from misrepresenting itself as licensed professionals and demands clear disclosures during patient interactions. These laws emphasize ethical oversight, harm prevention, and consumer protections, positioning California as a benchmark for rigorous AI regulation.
Texas counters with a more procedural approach through the Responsible Artificial Intelligence Governance Act, also effective January 1, 2026, which mandates transparency, documentation, internal testing, and red-teaming for enterprise AI in high-impact settings across public and private sectors. This framework prioritizes operational governance over broad prohibitions, requiring companies to maintain records of AI decision-making processes to ensure accountability without stifling innovation. Meanwhile, Colorado's AI Act stands as the nation's most comprehensive state-level governance law, originally slated for February 1, 2026 but delayed to June 30 amid industry feedback. It compels developers and deployers of high-risk AI—used in areas like education, employment, healthcare, housing, insurance, and legal services—to implement risk management programs, disclose impacts to consumers, and mitigate algorithmic discrimination through reasonable care standards. A legislative commission continues to refine its rollout, highlighting the practical challenges of enforcement.
This state-driven proliferation builds on prior years' momentum: 2023 saw fewer than 200 bills, 2024 nearly 100 enactments from over 600 introductions, and 2025 exceeded 1,200 bills across all 50 states. Active legislative sessions promise further growth in 2026, with hotspots including protections against nonconsensual deepfakes, AI in hiring, and regulatory sandboxes for innovation. Legal trackers from firms like Troutman Pepper and Baker Donelson note dozens of bills advancing nationwide, complicating compliance for multistate enterprises that must navigate divergent requirements—from content watermarks and training data reports to impact assessments and sector-specific rules in insurance or finance.
Compounding the challenge is the federal vacuum: as of early 2026, no comprehensive national AI law exists despite over 40 congressional bills since 2023. President Trump's December 2025 Executive Order directs the Attorney General to form an AI litigation task force targeting state laws inconsistent with a minimally burdensome federal framework, explicitly challenging measures like Colorado's AI Act and others effective January 1, 2026. The Secretary of Commerce must evaluate burdensome statutes by March 11, 2026, flagging those compelling altered outputs or unconstitutional disclosures. On March 20, 2026, the White House issued recommendations for a National AI Legislative Framework, advocating preemption of state laws that unduly burden AI development, deployment, or third-party liability—framed as safeguarding U.S. competitiveness while respecting federalism. Congressional debates, including provisions in the One Big Beautiful Bill Act, echo this push for uniformity.
Internationally, the European Union's AI Act looms large, with full application on August 2, 2026, enforcing risk-based classifications, labeling for AI-generated content via a Code of Practice expected final by June 2026, and the November 2025 Digital Omnibus proposal simplifying high-risk provisions. Global firms must align U.S. state mandates with EU standards, where the most stringent rules often dictate nationwide—or worldwide—compliance, akin to the data privacy ripple from California's CCPA.
For businesses, the implications are profound: compliance teams face a fragmented landscape demanding real-time monitoring, watermarking, risk audits, and tailored disclosures across jurisdictions. Absent federal preemption, operating in California or Colorado may necessitate upgrades meeting the strictest standards everywhere, spurring investments in AI governance tools amid looming litigation. As sessions wrap in 2026, analysts anticipate maturation of these efforts, potentially pressuring Congress toward harmonization while states experiment boldly in the regulatory void. This dynamic not only fosters innovation through sandboxes and incentives but also safeguards vulnerable users from AI's risks, from chatbot harms to discriminatory decisions, shaping a balanced path forward.
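The compliance arithmetic described above can be sketched simply: operating across several states means meeting the union of their obligations, which in practice pushes enterprises toward the strictest combined standard. The jurisdictions and obligation names below are simplified stand-ins for illustration, not a legal inventory.

```python
# Illustrative "strictest-standard" planning across state AI laws.
# Obligation labels are hypothetical simplifications, not legal categories.

STATE_OBLIGATIONS = {
    "CA": {"content_watermarking", "training_data_disclosure",
           "chatbot_identity_disclosure"},
    "CO": {"risk_management_program", "consumer_impact_disclosure"},
    "TX": {"internal_testing", "red_teaming", "decision_documentation"},
}

def required_controls(operating_states):
    """Union of obligations: a multistate deployment must satisfy them all."""
    controls = set()
    for state in operating_states:
        controls |= STATE_OBLIGATIONS.get(state, set())
    return sorted(controls)

# A firm in California and Colorado inherits both states' requirements:
print(required_controls(["CA", "CO"]))
```

The practical consequence is the one the text draws: absent federal preemption, the cheapest engineering path is often to build every deployment to the most demanding applicable standard rather than to maintain per-state variants.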
General Counsels, Chief Compliance Officers, and Regulatory Affairs Leaders
The state AI legislation surge is creating compliance obligations that cannot wait for federal clarity. With California's more than 20 new AI laws now in effect, Colorado's AI Act approaching its delayed June 30 deadline, and dozens more bills advancing nationwide, enterprises must immediately establish multi-state AI compliance monitoring capabilities. Legal teams should prepare for potential federal-state conflicts arising from the December 2025 Executive Order while ensuring current deployments meet the most restrictive applicable requirements — particularly for AI systems interacting with minors, healthcare decisions, and consumer-facing content generation.
IFS Abandons User-Based Licensing in Radical Shift to Asset-Centric AI Pricing
On April 2, 2026, IFS announced a fundamentally new approach to pricing that moves decisively away from the per-user licensing model that has dominated enterprise software for decades. In its place, the company introduced an asset-based structure that ties software costs to the operational assets a company manages — such as vessels, production equipment, infrastructure components, or manufacturing lines — rather than to the number of human users or AI agents accessing the system.
The implications of this shift are substantial. Under traditional per-user licensing, an energy company managing 400 offshore assets might need to purchase licenses for the 12,000 people and machines that need to access asset data — creating a cost structure that scales linearly with headcount and effectively punishing organizations for deploying AI more broadly across their workforce. Under IFS's new model, that same company pays based on 400 assets regardless of how many humans or AI agents interact with the system, fundamentally decoupling software costs from the number of access points and creating predictable expenses that align with operational reality.
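A back-of-envelope comparison makes the scaling difference in that scenario concrete. The per-seat and per-asset price points below are hypothetical, chosen only to show how the two models behave as AI agents multiply; they are not IFS's actual rates.

```python
# Hypothetical cost comparison for the scenario in the text:
# 400 offshore assets versus 12,000 human and machine users.
# Price points are illustrative assumptions, not real IFS pricing.

def per_user_cost(users, price_per_user=1_000):
    """Traditional licensing: cost scales with every seat or agent."""
    return users * price_per_user

def per_asset_cost(assets, price_per_asset=20_000):
    """Asset-based licensing: cost scales with the asset base only."""
    return assets * price_per_asset

users, assets = 12_000, 400
print(per_user_cost(users))    # grows with each new human or AI agent
print(per_asset_cost(assets))  # fixed until the asset base itself changes

# Doubling the AI agent population doubles the per-user bill,
# while the per-asset bill stays flat:
print(per_user_cost(users * 2))
print(per_asset_cost(assets))
```

The decoupling is the point: under the asset model, deploying another thousand monitoring agents changes the value delivered but not the license bill, which removes the disincentive the text describes.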
The timing of IFS's announcement reflects a growing recognition across the enterprise software industry that traditional pricing models are incompatible with the economics of AI-driven operations. As organizations deploy AI agents that can autonomously monitor, analyze, and act on operational data, the concept of a user becomes increasingly ambiguous. An AI agent checking equipment sensor data every 30 seconds does not map neatly to a user seat, and pricing models that attempt to count agent interactions as user equivalents create perverse incentives to limit AI deployment precisely where it could deliver the most value.
IFS CEO Mark Moffat and the leadership team have positioned this move as a direct challenge to competitors who continue to charge per-user fees for AI capabilities layered on top of existing enterprise resource planning and asset management platforms. According to Moffat, the company argues that user-based pricing creates an artificial ceiling on AI adoption — organizations are forced to make cost-benefit calculations about each additional AI deployment rather than allowing the technology to permeate wherever it creates operational value. By removing this constraint, IFS is betting that customers will deploy Industrial AI more aggressively, generating greater platform lock-in and long-term revenue growth. Moffat stated directly: "This is a clear message to our customers: rather than rationing users, IFS wants you using AI everywhere you can to create value. Our customers should not have to choose between automating their operations and controlling their software costs."
The asset-based model also addresses one of the most persistent complaints from enterprise buyers: pricing unpredictability during AI scaling. When organizations pilot AI systems with a limited number of users and then attempt to scale enterprise-wide, user-based licensing creates sudden cost escalations that can derail business cases and stall rollouts. IFS's approach provides cost certainty from day one — the price is set by the asset base, which changes far less frequently than workforce deployment patterns, enabling organizations to scale AI adoption without renegotiating contracts or absorbing unexpected license fees.
Industry analysts have responded with cautious optimism. Mickey North Rizza, Group Vice-President of Enterprise Software at IDC, noted that "IFS moving into the next realm of pricing means buyers have flexibility in the Agentic world. IFS new pricing model helps companies operationally scale their investment to the value levers it needs to run the business. This new methodology will help clients sustain their economic value." The shift is designed for what IFS describes as "industrial systems of action," where software investment aligns with the operational environments a company manages rather than the number of users accessing the system. This creates metrics that are measurable, auditable, and transparent, ensuring organizations pay for the operational value the system supports rather than every individual, contractor, or automated process interacting with it.
The competitive implications extend well beyond IFS's immediate market. If asset-based pricing gains traction among industrial enterprises, it will pressure SAP, Oracle, Microsoft, and other major enterprise software vendors to reconsider their own AI pricing strategies. Several of these vendors have already experimented with consumption-based pricing for AI features, but none have made as clean a break from user-based licensing as IFS. The move effectively reframes the competitive conversation from "how much does AI cost per user?" to "how much value does AI create per asset?" — a metric that is far more favorable to aggressive AI adoption.
For enterprise buyers evaluating AI platform investments, IFS's pricing innovation raises a fundamental question about vendor alignment. Organizations whose primary value creation is tied to physical assets — energy, manufacturing, transportation, utilities, defense — may find that asset-based pricing naturally aligns software costs with business outcomes in ways that user-based models never could. This alignment could prove particularly powerful as autonomous AI agents proliferate across industrial operations, creating dozens or hundreds of machine users for every human operator and making traditional per-seat licensing economically untenable. IFS's model represents recognition that in an increasingly AI-driven industrial landscape, the conventional metrics for measuring software consumption no longer reflect operational reality or business value creation.
CFOs, CIOs, and Procurement Leaders in Asset-Intensive Industries
IFS's move to asset-based pricing signals a structural shift in how enterprise AI will be commercialized. For CFOs and procurement leaders in manufacturing, energy, utilities, and other asset-intensive sectors, this model offers cost predictability that traditional licensing cannot match as AI agent deployments scale. Organizations should use IFS's announcement as leverage in vendor negotiations — even with competitors still on per-user models — and begin modeling their own AI total cost of ownership on an asset basis rather than a headcount basis to identify the true economics of enterprise-wide AI deployment.
The Bottom Line
This week's developments crystallize a fundamental truth about the AI ecosystem in April 2026: the technology has matured past the point of experimentation, and the hard work of integration — into workforces, standards bodies, legal frameworks, and business models — is now the defining challenge. Oracle's willingness to cut 30,000 jobs to fund AI infrastructure demonstrates that major enterprises are making irreversible bets on an AI-first operating model, while the reported 17% reskilling rate reveals how far organizations have to go in managing the human dimension of this transformation.
The convergence of open standards at the MCP Dev Summit and the proliferation of state-level AI regulation represent two sides of the same coin: the ecosystem is building the rules — both technical and legal — that will govern how AI operates in production. The Agentic AI Foundation's governance framework and state measures like California's AB 489 chatbot disclosure law are both attempts to establish guardrails before autonomous systems become too deeply embedded to regulate effectively. Organizations that engage proactively with both tracks will be better positioned than those forced to retrofit compliance and interoperability after the fact.
IFS's pricing innovation, meanwhile, points to a future where the business models surrounding AI are as disruptive as the technology itself. As AI agents multiply across enterprise operations, the vendors and buyers who align costs with value creation — rather than clinging to legacy licensing structures — will capture the next wave of competitive advantage. The executive imperative is clear: the organizations that thrive will be those that manage all four dimensions simultaneously — workforce transformation, technical standardization, regulatory compliance, and commercial model innovation — rather than treating any one as a standalone initiative.