OpenAI Has No Moat — The Day a $3.48 AI Beat the $30 One
Summary
DeepSeek V4's public release on April 24, 2026, delivered a triple shock to the global AI industry, simultaneously demonstrating the limits of American semiconductor export controls, shattering premium AI pricing conventions, and igniting a landmark intellectual property dispute. The model's successful training of a 1.6-trillion-parameter frontier system on Huawei's Ascend 950PR chips — hardware that American restrictions were explicitly designed to make unavailable — constitutes the most direct empirical challenge yet to the containment strategy underpinning Washington's AI policy. At $3.48 per million tokens, DeepSeek V4-Pro's API pricing is approximately one-tenth that of OpenAI's GPT-5.2, representing not a competitive discount but a structural signal that AI is transitioning from a scarce premium product to commoditized, utility-grade infrastructure. Concurrent accusations from Anthropic and OpenAI — alleging that 24,000 fraudulent accounts were used to harvest 16 million proprietary conversations for model distillation — have raised fundamental questions about the boundaries of intellectual property in an era where open-source AI models freely circulate. These converging disruptions point toward a fundamental restructuring of the AI industry's competitive landscape, business models, and geopolitical alignments that will reshape everything from API pricing strategy to chip export policy over the next two to five years.
Key Points
Huawei Chip Self-Sufficiency Proves the Export Control Paradox
DeepSeek V4 achieved what American policymakers spent three years trying to prevent: successfully training a 1.6-trillion-parameter frontier AI model on non-NVIDIA hardware — specifically Huawei's Ascend 950PR chip. This development is the clearest empirical evidence yet that the strategy of semiconductor export controls, designed to deny China the computational resources needed for frontier AI development, has produced a paradoxical outcome that policymakers urgently need to reckon with. By systematically blocking NVIDIA A100s, H100s, and Blackwell-series chips from Chinese buyers starting in 2022, Washington provided Huawei and the broader Chinese semiconductor ecosystem with an existential mandate to build domestic alternatives — and existential mandates are extraordinarily powerful drivers of innovation. The Ascend 950PR now reportedly performs at approximately 85% of an NVIDIA H100's throughput: not quite equivalent, but more than sufficient to train models that compete head-to-head with Silicon Valley's best work. The structural lesson is not that export controls are ineffective in the short term — they demonstrably slow progress by creating friction and supply constraints. The lesson is that export controls against a motivated, well-resourced competitor operating at national scale cannot permanently prevent capability development; they can only redirect it, and the redirected effort produces domestic alternatives that wouldn't otherwise exist. The critical policy question now is not whether these controls failed with DeepSeek V4 — they clearly did — but whether policymakers will draw the right conclusions and recalibrate strategy, rather than doubling down on restrictions that have already demonstrated their structural limits.
The $3.48 API Price and the Dawn of AI Commoditization
DeepSeek V4-Pro's API pricing at $3.48 per million tokens is the most direct signal yet that the AI industry is entering its commoditization phase — a transition that has profound implications for every company currently building a business on premium AI pricing. OpenAI's GPT-5.2 at $30 per million tokens carries a premium of roughly 8.6x, one that the benchmark data no longer justifies when DeepSeek scores within 1-2 percentage points on both MMLU and HumanEval assessments. The structural significance of this price point extends far beyond competitive pressure on OpenAI's margins — it marks a threshold at which AI transitions from a scarce product accessible only to well-capitalized enterprises to something approaching a utility, the way electricity, internet bandwidth, and cloud storage became utilities as cost curves declined through successive technology generations. Every major technological infrastructure transition in modern history has followed this pattern: initially expensive and exclusive, then progressively cheaper and more broadly accessible, eventually near-free and effectively universal. AI is now visibly on that curve, and the $3.48 price point may be the tipping point that converts latent demand into mass adoption across the millions of companies and organizations that have been monitoring AI without deploying it at scale due to cost constraints — particularly in emerging markets where AI budgets are measured in thousands of dollars annually rather than millions. The cascading effect of that adoption wave, combined with the open-source availability of DeepSeek V4 under MIT license for self-hosted deployment, suggests that the next 12-24 months will see broader real-world AI application than any prior period in the technology's history.
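As a back-of-envelope check on the figures above: the $3.48 and $30 list prices are taken from the article, while the monthly token volume is a hypothetical workload chosen purely for illustration.

```python
# Back-of-envelope comparison of the two list prices cited in the article.
# Prices are in USD per million tokens; the monthly volume is hypothetical.

GPT52_PRICE = 30.00      # OpenAI GPT-5.2, per the article
DEEPSEEK_PRICE = 3.48    # DeepSeek V4-Pro, per the article

def monthly_cost(price_per_m_tokens: float, tokens_per_month: int) -> float:
    """Monthly cost in USD for a given token volume at a per-million-token price."""
    return price_per_m_tokens * tokens_per_month / 1_000_000

# Example: an application processing 500 million tokens per month.
volume = 500_000_000
gpt_cost = monthly_cost(GPT52_PRICE, volume)      # $15,000/month
ds_cost = monthly_cost(DEEPSEEK_PRICE, volume)    # $1,740/month

premium = GPT52_PRICE / DEEPSEEK_PRICE            # ~8.6x price gap
savings = 1 - DEEPSEEK_PRICE / GPT52_PRICE        # ~88% cost reduction
```

The exact ratio (8.6x, an 88% saving) is slightly below the "one-tenth" shorthand used throughout the piece, but the order of magnitude is what drives the procurement calculus.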
The IP Distillation Controversy: Open Source Meets Intellectual Property Head-On
The allegations leveled by Anthropic and OpenAI against DeepSeek represent the most significant intellectual property dispute the AI industry has yet produced, and the resolution will set rules that govern open-source AI development for the next decade. The core claim is that DeepSeek created 24,000 fraudulent accounts, systematically harvested 16 million proprietary Claude conversations, and used that data to distill competitive model capabilities — a form of organized knowledge extraction that, if proven, constitutes theft of proprietary training signals rather than conventional competitive intelligence gathering. The legal complexity is compounded by the fact that model distillation exists in a genuinely unsettled legal gray zone, with no binding precedent at this scale in any major jurisdiction, and by the fact that model outputs are widely understood within the research community as potential training data. The historical irony cannot be overlooked: OpenAI itself has faced — and continues to face — multiple lawsuits alleging that its training pipeline ingested copyrighted internet content without authorization, raising uncomfortable questions about where the moral authority to prosecute this suit actually rests. The broader question this dispute forces the industry to confront is fundamental: in an era where AI outputs can be used as training data, where model architectures are increasingly published, and where frontier capabilities can be replicated at a fraction of original development cost, what constitutes ownable intellectual property? The eventual legal resolution — across American, European, and Chinese courts that will likely reach different conclusions — will shape the rules of open-source AI development globally, determining whether the open-source ecosystem operates under constant legal threat or with defensible legitimacy.
The Moat Has Fallen: Open Source Closes the Gap With Unprecedented Speed
The competitive trajectory demonstrated by DeepSeek V4 — achieving within 1-2 percentage points of GPT-5.2's benchmark performance at one-tenth the API cost — confirms a pattern that has been emerging for years but is now structural and irreversible: open-source AI models are closing the capability gap with frontier closed-source models faster with each successive generation. With GPT-4, the open-source gap closed within roughly 18 months. With Claude 3.5, within about 12 months. With GPT-5.2, the gap appears to have closed nearly simultaneously — before premium pricing could generate sufficient returns on the development investment. If this acceleration continues, the closed-source performance moat will cease to be a commercially viable business foundation, because open-source models will routinely match or exceed closed-source performance at each capability level before premium pricing can be defended. OpenAI's $300 billion valuation was built on an assumption of sustained performance advantage — that closed-source development would maintain a meaningful and durable capability lead over the open-source alternative. That assumption is no longer defensible on current evidence. The 12,000-plus GitHub forks of DeepSeek V4 within 48 hours of release represent a distributed improvement engine that compounds with time and community contribution, potentially accelerating the model beyond its launch benchmarks within months through specialized fine-tuning and architectural optimization across dozens of domain applications.
AI Industry Business Models Face a Structural Reckoning
DeepSeek V4's emergence forces a direct confrontation with the fundamental tension at the core of the current AI industry structure: every major AI company's primary revenue model — charging for API access to proprietary models — depends on those models being significantly better than freely available open-source alternatives and not easily replicable at a fraction of the development cost. Both of those conditions are now simultaneously compromised, creating an existential strategic challenge that incremental adjustments cannot resolve. The most viable path forward for OpenAI, Anthropic, and Google is an accelerated pivot toward a value-added service model: enterprise deployments with compliance and security guarantees, fine-tuning and domain specialization services, integration support, data pipeline infrastructure, and the kind of managed platform experience that creates durable switching costs. This is not a new observation — both OpenAI and Anthropic have been moving in this direction — but the urgency has increased dramatically following DeepSeek V4's demonstration. The cloud computing analogy is instructive: AWS did not remain competitive by offering cheaper compute hours alone; it built a comprehensive platform of managed services, developer tools, and ecosystem integrations that created switching costs far more durable than raw pricing. The AI companies that will survive the commoditization of base model access are the ones that execute that same transition — building service layers with genuine stickiness — before their enterprise customer bases complete the migration calculus that DeepSeek V4 has made suddenly urgent and economically compelling.
Positive & Negative Analysis
Positive Aspects
- Genuine Democratization of AI Access Across Markets That Have Been Priced Out
The most meaningful consequence of DeepSeek V4's $3.48 pricing is not the competitive pressure it places on OpenAI's margins — it's the opening of frontier-grade AI access to organizations and markets that were previously excluded entirely by cost. When frontier API access costs $30 per million tokens, the calculus is brutally simple for a Series A startup in Nairobi or a healthcare NGO operating in rural Indonesia: the math doesn't work, and AI adoption gets deferred indefinitely. At $3.48, that calculus changes completely, enabling a level of access that was until very recently available exclusively to well-capitalized technology companies in high-income markets. The MIT license release compounds this effect by enabling self-hosted deployment — meaning organizations with available compute infrastructure can run the full model at marginal cost, eliminating even the per-token fee for high-volume applications. This is real democratization with tangible distributional consequences: the wave of AI-powered innovation that follows will emerge from markets and problem spaces that premium pricing had effectively walled off.
- Enterprise Cost Reduction That Unlocks Substantial Innovation Reinvestment
A 90% reduction in AI API costs is not merely a line-item saving — it is a meaningful reallocation of capital that can fundamentally change what organizations choose to build and at what pace. Companies currently spending millions of dollars annually on AI API access will find, upon switching to DeepSeek V4 or the price-competitive alternatives that follow in its wake, that substantial capital is suddenly available that was previously committed to infrastructure. That freed capital can be redirected into AI-powered product development, specialized fine-tuning experiments, expanded AI engineering teams, or passed along as margin improvement that strengthens competitive position. The AWS analogy is instructive again: when cloud computing dropped infrastructure costs by 90% relative to maintaining owned data centers, the freed capital across the enterprise software ecosystem fueled a decade of product innovation that would not have been economically viable at prior cost structures. The enterprise AI market stands at an analogous threshold — companies that have been using AI narrowly can now consider deploying it broadly as a default infrastructure layer, creating a compounding innovation cycle that was inaccessible at $30-per-million-token pricing.
- Open Source Ecosystem Strength and the Compounding Power of Collective Intelligence
DeepSeek V4's release under the MIT license is not merely a business strategy or competitive tactic — it is a meaningful contribution to one of the most productive research ecosystems in technology history, and the compounding effects of that contribution will extend far beyond the model's current benchmark performance. When a 1.6-trillion-parameter frontier model is made freely available for modification, fine-tuning, and redistribution, the global developer community collectively becomes its permanent R&D department, and that community can allocate talent and computational resources to specialized problems at a speed and scale no single organization could match. The 12,000-plus GitHub forks within 48 hours represent the opening salvo of an improvement effort that will compound over months and years: specialized derivatives for medical diagnosis, legal document analysis, scientific literature synthesis, financial risk modeling, and dozens of other domains are already in development. The principle that collective intelligence consistently outpaces single-organization development is well-established in technology history — Linux, which now powers the vast majority of global server and cloud infrastructure, was produced by a decentralized community without the organizational resources of its commercial competitors. As more specialized derivatives of DeepSeek V4 are deployed and real-world performance data accumulates, insights flow back into the research community, improving future iterations in a feedback loop that benefits the entire ecosystem.
- Breaking the AI Chip Monopoly and Diversifying the Hardware Supply Chain
Until DeepSeek V4's demonstration, the practical reality of AI training at frontier scale was near-total NVIDIA dependence — not because alternatives didn't exist, but because none had been demonstrated at this capability level in actual production at scale. DeepSeek's training of a 1.6-trillion-parameter model on Huawei Ascend 950PR hardware changes that calculus fundamentally and permanently. This is the first credible empirical demonstration that the hardware layer of frontier AI training is more diverse than the industry's NVIDIA-centric supply chains had assumed, and it provides a proof point that other chip manufacturers can reference when competing for frontier AI workloads. AMD's MI300 series, Intel's Gaudi accelerators, and a range of emerging market chip designs now have a concrete competitive narrative to build on. Real chip market competition — with multiple credible alternatives rather than one overwhelmingly dominant provider — drives prices down, improves supply resilience, and reduces the single-vendor dependency risk that has made AI infrastructure vulnerable to supply shocks and geopolitical disruptions.
Concerns
- IP Theft Allegations Threaten the Incentive Structure for Foundational AI Research
The most serious long-term risk embedded in the DeepSeek V4 story is not competitive pressure on any individual company's pricing — it's what the IP distillation allegations, if proven true, mean for the systemic incentive structure underlying foundational AI research globally. If it becomes established industry understanding that a competitor can acquire frontier model capabilities by harvesting millions of proprietary conversations through fraudulent accounts and using that data for distillation, the message to the entire field is deeply corrosive: the most efficient path to capability is not investing billions in original research but systematically extracting the outputs of those who did. In an industry where the economics of foundational research are already difficult — frontier model development requires massive capital commitment with uncertain and extended timelines to return — any further erosion of the competitive moat associated with that investment makes it increasingly hard to justify at the board and investor level. The downstream consequence, if this logic normalizes across the industry, is that organizations reduce foundational research commitments in favor of faster-cycle derivative work, and the frontier itself stops advancing. The legal resolution of the Anthropic and OpenAI suits will be critical — not just for the specific parties, but as a signal to the entire industry about whether foundational research investment is protected by defensible legal frameworks.
- Unverified Safety Standards Create New Risk Vectors at Unprecedented Scale
DeepSeek V4's open-source release of a 1.6-trillion-parameter model raises safety questions that are being dangerously underweighted in a public conversation dominated by pricing and geopolitics. OpenAI and Anthropic subject their closed-source models to extensive red-team evaluation, adversarial testing, and alignment assessment before public release — processes that, while imperfect and often critiqued, provide meaningful barriers against the most clearly harmful applications. Whether DeepSeek subjected V4 to equivalent rigor before releasing it under an MIT license with no access controls is not transparent from publicly available information, and that lack of transparency is itself a significant concern for safety researchers and policymakers. A frontier model deployed globally at zero access cost, with no API monitoring infrastructure and no deployment restrictions, creates a dramatically expanded attack surface for malicious applications: large-scale disinformation campaigns, automated social engineering at industrial scale, cyberattack code generation, and sophisticated deepfake production all become more accessible when frontier-grade model weights are freely downloadable. The divergence between Chinese government content standards and Western AI safety frameworks adds another layer of complexity — the safety assumptions and content filters built into DeepSeek V4 may reflect political and regulatory priorities that are inconsistent with Western AI ethics norms, meaning that the model's behavioral boundaries in deployment may differ meaningfully from what Western users, enterprises, and regulators expect.
- Escalating Tech Cold War Threatens Global AI Ecosystem Coherence
DeepSeek V4's success has delivered a pointed message to Washington: export controls have not achieved their stated containment objectives, and the gap between policy intent and actual outcome is now publicly visible in a way that is difficult to ignore or minimize. The predictable governmental response — already visible in early Commerce Department reporting — is escalation: tighter restrictions on Huawei Ascend chips, potential restrictions on AI model exports, and possibly direct legal challenges to DeepSeek's presence in the American market. If this escalation materializes fully, the global AI market fragments into distinct Western and Eastern technology stacks that become increasingly incompatible with each other at the level of standards, regulations, and infrastructure. The businesses and countries caught in the middle — Southeast Asian enterprises, Middle Eastern technology investors, African startups, Global South governments building national AI strategies — face genuinely impossible choices between technology alignment and geopolitical allegiance. This is the AI manifestation of the splinternet dynamic that has been fragmenting the broader internet ecosystem across content moderation, data localization, and platform access, and it carries all the same structural costs: reduced standardization, incompatible regulatory frameworks, duplicated development efforts, and technology adoption decisions driven by political logic rather than technical merit.
- Weakened Frontier R&D Investment Could Stall the Innovation Frontier Itself
The competitive economics introduced by DeepSeek V4 create a real and underappreciated risk of reducing investment in frontier AI research at exactly the moment when that research is most consequential for the field's long-term trajectory. OpenAI has invested tens of billions of dollars in developing its model capabilities — a capital commitment made possible by investor confidence that the competitive advantage generated by that investment would be durable and commercially defensible over a meaningful time horizon. When the competitive advantage of tens of billions in research and development investment can apparently be replicated — or allegedly cloned through distillation — within 12 months at a fraction of the original cost, the investment thesis for frontier model development is fundamentally challenged at the board and investor level. The rational response is not necessarily to eliminate all foundational research; it is to shift the investment portfolio toward work with faster return cycles and more defensible competitive moats. But if that shift is sufficiently broad and deep across the industry, the organizations doing the most ambitious foundational work may find their funding diminishing precisely as the technical challenges grow most complex and capital-intensive. The structural irony is clear: the faster open-source models close the gap with closed-source frontier work, the weaker the financial case for producing the frontier work that open-source is closing the gap on.
Outlook
In the immediate 1-to-6-month horizon, the most consequential development will be a rapid and painful restructuring of AI API pricing across the board. OpenAI's $30 per million token price for GPT-5.2 is, in my view, commercially unsustainable beyond the next three months — not because the model isn't valuable, but because enterprise buyers now have a credible alternative that performs within 1-2 percentage points of it at roughly one-tenth of the cost. Reports are already surfacing of Fortune 500 procurement teams evaluating DeepSeek V4 migration timelines, and that evaluation pressure will intensify through the summer. OpenAI's options are constrained: cut prices dramatically and compress already-thin margins, or maintain pricing and watch enterprise customer concentration shift toward DeepSeek and the broader open-source ecosystem. Anthropic faces structurally identical pressure on Claude pricing, and I expect corporate contract renewal rates to show measurable erosion beginning in the second half of 2026.
Google will need to revisit the Gemini pricing structure as well, since the market's price anchor has effectively shifted overnight. The premium-model, premium-price formula that defined commercial AI's first wave is breaking down, and it is not coming back in its current form. The only viable response for incumbent AI providers is a rapid and credible repricing strategy — and even that may not be sufficient to prevent meaningful customer attrition to open-source alternatives. The economics of this transition will play out faster than most industry observers currently expect, because enterprise procurement cycles, once initiated, move with institutional momentum.
Simultaneously, the legal front will begin its most intensive phase in this short-term window. The IP distillation lawsuit filed by Anthropic and OpenAI will escalate from initial filings toward substantive court proceedings by late 2026, and the ruling will set binding precedents for the entire AI ecosystem — precedents that could reshape how open-source development is practiced globally. If courts classify API-based data collection for model distillation as infringement, the chilling effect on open-source AI development will be significant, since distillation techniques are widely used across the community, not exclusively by DeepSeek. A ruling in DeepSeek's favor, conversely, formally legitimizes a practice that many developers already treat as standard. The jurisdictional complexity amplifies the uncertainty: American, European, and Chinese courts will likely reach different conclusions, accelerating the regulatory fragmentation of global AI development.
For companies building on open-source AI foundations, this legal fog is a genuine operational risk that should inform both deployment strategy and IP protection posture over the next 12-18 months. The prudent organizational response is to maintain clear records of training data provenance, avoid workflows that could be characterized as systematic competitive extraction, and monitor the legal proceedings closely for signals about how courts will interpret the boundaries between fair use, competitive intelligence, and IP theft. These are not abstract legal questions — they have direct implications for whether the open-source AI development model remains viable at scale.
Looking out 6 months to 2 years, the AI industry's fundamental business model faces a structural reckoning that cannot be avoided with incremental adjustments. The charge-for-API-access revenue model — which has been the core monetization engine for every major AI company — breaks down when the underlying model is freely available and access charges fall to $3.48 per million tokens. By mid-2027, I predict AI API pricing will have dropped to 20-30% of current levels, and the revenue mix for surviving AI companies will need to shift accordingly: at least 60% of revenue will have to come from value-added services — fine-tuning, enterprise support, compliance tooling, data pipeline infrastructure, and domain-specific deployment services.
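The 60% service-revenue figure follows from simple arithmetic. If API prices fall to roughly 25% of current levels (the midpoint of the projected 20-30% range) and token volumes do not grow enough to compensate, a provider that wants flat total revenue must backfill the difference with services. A hypothetical illustration, using indexed revenue figures rather than real financials:

```python
# Illustration of the revenue-mix arithmetic behind the projection above.
# All revenue figures are hypothetical indices; only the 20-30%
# price-retention range comes from the article's projection.

current_api_revenue = 100.0   # index today's revenue at 100, all from API access
price_retention = 0.25        # midpoint of the projected 20-30% range

# With flat token volume, API revenue falls in proportion to price.
future_api_revenue = current_api_revenue * price_retention     # 25.0

# Services needed to hold total revenue flat at 100:
required_services = current_api_revenue - future_api_revenue   # 75.0
service_share = required_services / current_api_revenue        # 0.75

# Token volume would have to grow 1.6x just to bring the service share
# back down to the 60% floor (i.e., API revenue at 40% of a flat total).
breakeven_growth = 0.40 * current_api_revenue / future_api_revenue  # 1.6
```

Under these assumptions, "at least 60% from services" is actually the optimistic end: it already presumes meaningful volume growth on the cheaper pricing.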
This is precisely the evolution cloud computing underwent when AWS moved from raw server-hour billing to managed platform services — and the companies that failed to make that transition paid dearly. OpenAI is already pushing ChatGPT Enterprise hard. Anthropic is betting on Claude for Teams. Both instincts are directionally correct — the critical question is whether they can execute the pivot at sufficient speed before enterprise customer bases complete their migration calculus. The medium-term window of 12-24 months is when this transition will be decided, and companies that are still debating whether to make the shift will find the decision made for them by market forces.
The medium-term geopolitical dimension may prove more consequential than the business model shift, and it's the scenario I find most difficult to evaluate with confidence. Washington's response to DeepSeek V4's success will almost certainly involve tighter restrictions — the Commerce Department is reportedly already exploring controls on Huawei Ascend chips and potentially extending to AI model exports themselves. If those measures materialize at scale, the global AI market bifurcates into distinct Western and Eastern technology stacks: a US-anchored ecosystem built around OpenAI, Anthropic, and Google, and a China-anchored ecosystem built around DeepSeek, Baidu, and Alibaba. My estimate is that 35-40% of global businesses — particularly those in Southeast Asia, the Middle East, and Africa — will adopt dual-track strategies by 2027, running both stacks simultaneously rather than committing to either geopolitical camp.
This is the internet's splinternet scenario manifesting in AI, with all the associated costs: reduced standardization, fragmented developer ecosystems, and technology adoption decisions driven by political allegiance rather than technical merit or economic efficiency. The long-term consequence is an AI landscape where global collaboration on safety standards, research benchmarking, and ecosystem interoperability becomes progressively more difficult — a fragmentation that ultimately slows the rate of beneficial AI development for everyone.
Looking 2-5 years out, the trajectory points toward full commoditization of frontier AI models — a development that will force a fundamental reimagining of how AI companies justify their valuations. By 2028-2029, frontier-grade AI access will cost close to nothing for most practical applications, the same way Linux made server operating systems effectively free after decades of expensive proprietary licensing. The $3.48 per million token price point is not an endpoint; it's a waypoint on a curve that trends toward near-zero. When that happens, OpenAI's $300 billion valuation faces a profound reckoning. A company's market value cannot rest on model quality as a primary moat if model quality has ceased to be scarce. The analogy that resonates most clearly is Google: Google's value doesn't derive from its search algorithm, which could theoretically be replicated. It comes from the advertising ecosystem, the data flywheel, and the two decades of platform relationships built on top of that algorithm.
The longer view also suggests an AI landscape that evolves toward genuine multipolarity rather than settling into a stable two-camp structure. DeepSeek V4's most consequential global message is straightforward: you do not need the best chips, and you do not need Silicon Valley's capital structure, to compete at the frontier. That message is being heard broadly. The European Union is accelerating investments in Mistral and other regional AI champions. India is building a national AI capability program with serious government backing. Middle Eastern sovereign wealth funds are funding domestic AI ventures at a pace that would have seemed implausible three years ago. By 2029-2030, the AI industry map will look fundamentally different from its current configuration — the United States will likely retain leadership in foundational research, but commercialization and deployment will be captured by a diverse coalition of open-source-powered regional ecosystems.
In terms of scenario analysis, the bull case sees DeepSeek V4 triggering a genuine AI accessibility revolution, global productivity improving by 15-20% annually as frontier AI becomes universally accessible, API prices falling to 5% of current levels by 2028, and the open-source ecosystem absorbing and surpassing every closed-source capability. The base case — which I assign approximately 55% probability — sees a 1-2 year pricing war that eventually reaches equilibrium: OpenAI and Anthropic cut prices by 70% or more while pivoting hard to premium enterprise services, DeepSeek establishes dominance in China and the Global South, and both camps coexist with distinct customer bases and differentiated service offerings. The bear case, which I take more seriously than its probability might suggest, involves IP distillation litigation triggering aggressive regulatory responses across multiple jurisdictions, Washington restricting DeepSeek model usage domestically, the global ecosystem fracturing completely, and the resulting legal and regulatory uncertainty slowing AI adoption across the board.
For decision-makers trying to navigate this transition productively: if you are a CTO or enterprise technology buyer, the responsible action is to run a DeepSeek V4 proof-of-concept now, not next quarter. For applications involving sensitive data, self-hosted open-source deployments on private infrastructure provide a path that avoids both cost exposure and data sovereignty concerns. If you are a developer, active participation in the open-source AI ecosystem — analyzing DeepSeek V4's architecture, running fine-tuning experiments, contributing to community improvements — is likely the highest-ROI professional investment available for the next five years. And if you are an investor, the structural shift argues for redirecting attention from AI model companies to AI infrastructure and tooling companies. When the models themselves commoditize, the durable competitive advantage accrues to companies building the compliance frameworks, safety layers, domain-specific deployment tooling, and data pipeline infrastructure that sits on top of the commodity model layer.
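The self-hosting recommendation above can be reduced to a break-even calculation. The $3.48 API price is taken from the article; the monthly infrastructure cost below is a purely hypothetical placeholder, since real GPU rental and throughput figures vary widely by deployment:

```python
# Break-even sketch: self-hosted open-weights deployment vs. metered API.
# The $3.48/M-token price comes from the article; the infrastructure cost
# is a hypothetical placeholder, not a measured figure.

API_PRICE = 3.48               # USD per million tokens (article)
MONTHLY_INFRA_COST = 20_000.0  # hypothetical GPU cluster rental, per month

def breakeven_tokens(monthly_infra_cost: float, api_price_per_m: float) -> float:
    """Monthly token volume above which self-hosting beats the metered API."""
    return monthly_infra_cost / api_price_per_m * 1_000_000

volume = breakeven_tokens(MONTHLY_INFRA_COST, API_PRICE)
# Roughly 5.7 billion tokens per month: below this, the metered API is
# cheaper; above it, the fixed infrastructure cost amortizes to a lower
# effective per-token rate, and the advantage grows with volume.
```

The directional takeaway is that at commodity per-token prices, only very high-volume workloads justify self-hosting on cost alone, which is why the data-sovereignty argument often ends up carrying more weight than the cost argument.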
Sources / References
- Why DeepSeek's V4 Matters: Technical Analysis of Huawei Chip Training Success — MIT Technology Review
- DeepSeek V4 Price-Performance Analysis and the AI Market Shock — Fortune
- DeepSeek V4 and the Open Source AI Competition Landscape: Enterprise Reactions — CNBC
- DeepSeek Previews New AI Model That Closes the Gap With Frontier Models — TechCrunch
- DeepSeek Launches 1.6-Trillion-Parameter V4 on Huawei Chips as the U.S. Escalates AI Theft Accusations — Tom's Hardware
- DeepSeek Launches V4 AI Model on Huawei Chips: Geopolitical Implications Analysis — Modern Diplomacy
- DeepSeek V4 Preview: Open-Sourced, 1M Context Breakthrough and 49B Active-Parameter Pro Model — Blockchain News