You’re Worried About the Wrong Chips
Logic without memory is useless.
In February 2026, Micron announced it would wind down its consumer memory business. The only American member of the global memory triopoly was telling smaller buyers, in primo corporate speak, that they would no longer be a priority. Micron’s larger strategic customers—namely the hyperscalers building AI data centers—took precedence.
That decision is a small piece of a much larger reallocation of industrial capacity that is about to hit American households hard. Research and advisory firm Gartner expects combined dynamic random access memory (DRAM) and solid-state drive (SSD) prices to rise 130 percent by the end of 2026, pushing PC prices up 17 percent and smartphone prices up 13 percent compared with 2025. It projects PC shipments to fall 10.4 percent and smartphone shipments to drop 8.4 percent in 2026. Even with Apple just entering the game with the Mac Neo, Gartner warns that the sub-$500 PC could disappear by 2028.
These are kitchen-table numbers in an era obsessed with data centers. More importantly, they are arriving on a timeline that the current chip strategy was not built to meet.
Washington has spent the last several years learning about logic. Terms like compute, inference, and training have become pervasive, and one would be hard-pressed to find a single member of Congress who hasn’t opined at length about the “chips” and data centers that make this all happen. The vast majority of that conversation has focused on what are known as logic chips: the central processing units (CPUs), graphics processing units (GPUs), and tensor processing units (TPUs) that can be thought of as the brain of a computer or data center. These are the pieces that do the thinking, the computing.
In the common conversation, those letters have become stand-ins for a theory of power: whoever controls the best and most logic chips will dominate the future. That picture is incomplete. Logic is crucial, but logic without memory is practically useless. And it is memory, not logic, that is about to show up in constituents’ monthly budgets.
Logic Without Memory
The technical distinction between logic and memory is relatively simple. Logic chips decide what happens next. A CPU runs general instructions, a GPU performs many calculations in parallel, an AI accelerator such as a TPU pushes fancy math through custom circuits, and a microcontroller tells a machine when to turn, brake, transmit, sense, or stop. Memory holds the working set. DRAM stores bits in cells built around a transistor and capacitor and, because the charge leaks, those cells have to be refreshed. Storage is the umbrella term for anything that persists data: flash memory (built on NAND technology) retains it without power in solid-state cells; a traditional hard drive stores it on spinning magnetic platters.
GPU compute capability has outpaced memory bandwidth by orders of magnitude over the last twenty years. Processors have grown roughly 60,000x faster while DRAM bandwidth has improved only about 100x. That gap is what the industry calls the “memory wall.” In the context of AI, model weights, embeddings, intermediate calculations, and training data have to move constantly between processor, memory, and storage. Push too little data through that pipe and the processor stalls. Even today’s best AI training runs hit only 35–50 percent of their GPUs’ theoretical peak FLOPs, mostly because the silicon is waiting on memory rather than computing. The same is true for your average personal computer. With few exceptions, the device you’re currently using is probably utilizing only around 10 percent of its compute power but anywhere from 30 to 70 percent of its memory capacity.
But if you build enough of the right memory bandwidth around a GPU, the same processor becomes useful. That is why the highest-end AI systems increasingly revolve around high-bandwidth memory (HBM): stacks of advanced DRAM chips vertically interconnected to increase bandwidth and energy efficiency while saving space. Micron’s HBM3E, for instance, comes in 8-high and 12-high stacks and delivers more than 1.2 terabytes per second per placement.
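The arithmetic behind the memory wall can be made concrete with a roofline-style back-of-envelope sketch. The example below is a minimal illustration, not a measurement of any real system: the accelerator’s peak throughput, the number of HBM placements, and the workload’s data reuse are all assumed figures, with only the roughly 1.2 terabytes per second per placement taken from the Micron example above.

```python
# Roofline-style back-of-envelope: is a workload compute-bound or memory-bound?
# All numbers below are illustrative assumptions, not vendor specifications.

PEAK_FLOPS = 1.0e15      # assumed accelerator peak: 1 petaFLOP/s
HBM_PLACEMENTS = 4       # assumed number of HBM stacks around the processor
BYTES_PER_SEC = HBM_PLACEMENTS * 1.2e12   # ~1.2 TB/s per placement

# The balance point: how many FLOPs the chip must perform per byte moved
# before compute, rather than memory bandwidth, becomes the limit.
balance_point = PEAK_FLOPS / BYTES_PER_SEC   # ~208 FLOPs per byte

def attainable_flops(flops_per_byte: float) -> float:
    """Attainable throughput is capped by whichever runs out first:
    raw compute or the data the memory system can deliver."""
    return min(PEAK_FLOPS, BYTES_PER_SEC * flops_per_byte)

# A low-reuse workload (think: streaming model weights through the chip)
# might manage only about 2 FLOPs per byte (an assumed, illustrative figure).
low_reuse = 2.0
utilization = attainable_flops(low_reuse) / PEAK_FLOPS

print(f"Balance point: {balance_point:.0f} FLOPs per byte")
print(f"Low-reuse utilization: {utilization:.1%} of peak")   # roughly 1%: the chip mostly waits
```

The point is the ratio rather than the exact numbers: when a workload reuses each byte only a handful of times, most of the processor’s theoretical compute sits idle waiting on memory, which is the stall behind the utilization figures above.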
Logic and memory are often lumped together as “chips,” but they are the two complementary pillars of the chip market. In 2025, logic was the largest product category by sales at $301.9 billion. Memory was second at $223.1 billion. Memory is not a niche piece of tech tucked somewhere behind NVIDIA’s income statement. Treating one as the whole race and the other as a component is how policymakers have previously ended up surprised by shortages that were being priced into contracts months earlier.
The manufacturing businesses are different, too. Leading-edge logic is about transistor density, design complexity, lithography, and the politics of TSMC. DRAM is a scale business built around yield, product mix, and timing a capital cycle that punishes both overbuilding and underbuilding. Beyond the shared use of silicon wafers, Samsung is essentially the only overlap between the two businesses. Three companies collectively control over 90 percent of the global DRAM supply: Micron in the US, and SK hynix and Samsung in South Korea.
Unlike logic chip manufacturing, where China and Taiwan loom over everything, China is a secondary player in memory. Founded in 2016 after a CCP-backed attempt to acquire Micron failed, ChangXin Memory Technologies (CXMT) is a Chinese DRAM manufacturer that built its early technology base by licensing intellectual property from bankrupt German chipmaker Qimonda. Still, CXMT is not to be ignored: the company has surged from a negligible position to holding 5–7 percent market share by the end of 2025.
As one market watcher put it, CXMT’s focus on capturing the low-to-middle market for legacy PC, smartphone, and smart device chips “[while] the Big 3 are distracted by AI gold” could have major long-term impacts on the memory market: “The current RAM shortage isn’t just a supply chain hiccup; it is the catalyst for a permanent shift in who owns the memory inside your devices.”
On CXMT, unlike on logic chips, Washington has not been asleep at the wheel. In 2022, the FY23 National Defense Authorization Act banned the US federal government from purchasing or using CXMT chips. In 2023, members of the House Select Committee on China urged the Commerce Department to place CXMT on the Entity List. By January 2025, the Defense Department had added CXMT to its Section 1260H list of companies allegedly linked to the People’s Liberation Army, though the Pentagon reversed course in February 2026 and removed CXMT from that list. Meanwhile, the Commerce Department’s Bureau of Industry and Security drafted plans to place CXMT on the Entity List alongside SMIC and YMTC subsidiaries, though the timing has been complicated by ongoing US-China trade negotiations. The company has also attracted legal trouble abroad, with Korean prosecutors indicting ten people, including a former Samsung executive, for allegedly transferring trade secrets that helped CXMT mass-produce advanced DRAM.
So the supply base for memory is friendly but thin. In the fourth quarter of 2025, TrendForce put Samsung at 36 percent of global DRAM revenue, SK hynix at 32 percent, and Micron at 22 percent. The problem is that allied capacity can still be scarce, slow to expand, and allocated first to buyers with the deepest pockets. Those deepest pockets belong to the hyperscalers. Microsoft, Google, Meta, Amazon, and OpenAI’s Stargate consortium have committed to multi-year orders that lock in HBM and high-end DRAM capacity through 2027. TrendForce expects AI workloads to consume roughly 20 percent of global DRAM wafer capacity in 2026, and some analysts put data centers at as much as 70 percent of all high-end memory chip production. This trend has left everyone else out in the cold, vying for an ever-smaller slice of a seriously supply-constrained market upstream from their actual business.
Old Industries Wrapped Around Computers
The consumer version of the squeeze is already visible in the Gartner numbers. We’re seeing fewer cheap devices, longer replacement cycles, and more people holding on to aging hardware. The back-to-school laptop, the family phone replacement, and the small-business desktop refresh will all become meaningfully more expensive in 2026, and substantially more expensive in 2027. That is what a memory shortage looks like when it reaches households.
That is also what hyperscaler purchasing power looks like from below. The economic problem is allocation. Cloud providers can sign long-term agreements, prepay, and guarantee volume. Smaller buyers buy what is left, often at spot prices and with weaker claims on future supply. Micron has already made the corporate logic explicit by winding down its consumer memory business so it can focus supply on larger strategic customers in faster-growing markets. It’s hard to blame Micron; the move makes perfect business sense. But that doesn’t soften the blow for mid-sized consumer electronics firms or, more importantly, for the households, schools, and small businesses that are used to buying cheap devices.
The industrial version will be harder to absorb. The automotive industry is the cleanest case. Cars no longer use memory only for infotainment. DRAM sits inside cockpit systems, advanced driver assistance systems, autonomy functions, sensor processing, and over-the-air update systems. S&P Global warns that the shift in DRAM capacity toward HBM for AI data centers leaves automakers exposed to a shortage that may be less dramatic than the 2021 crisis but more disruptive and longer lasting. The last chip shortage prevented more than 10 million vehicles from being built in 2021. A memory shortage that hits the electronic systems inside modern cars would not need to be that severe to matter. It would land in dealerships at the same time as other consumer price hikes (and midterm elections).
Telecommunications faces the same pressure with less public drama. Routers, Wi-Fi gateways, cable modems, set-top boxes, wireless base stations, optical transport, Open RAN gear, and other network infrastructure still rely heavily on DRAM (specifically DDR4) and similar memory technologies. According to NCTA, the primary trade association for the cable broadband industry, memory chips accounted for only 3 percent of the cost of a low- to mid-range router a year ago. Now they account for more than 20 percent. AI servers are pulling memory supply away from the ordinary equipment that keeps Americans connected and broadband networks expanding.
Aviation, shipping, logistics, healthcare, and industrial automation are not exempt just because they look like older sectors. They have become old industries wrapped around computers. Aircraft and defense systems need high-reliability memory and storage for mission-critical environments. Ports and shipping networks increasingly rely on sensors, smart containers, RFID systems, scanners, and real-time visibility tools. Warehouses run on handhelds, robots, access points, cameras, and local servers. Medical imaging machines process large data streams in real time. The logic vs. memory split does not stay in the data center. It follows every industry that has added software to a physical process.
Washington’s Timing Problem
The capacity numbers show how hard this is to fix quickly. Samsung aims to build 200,000 wafers per month of proprietary 1c DRAM capacity by the end of 2026, about one-third of its total DRAM output, which TrendForce cites at roughly 650,000 to 700,000 wafers per month. The ramp is staged: 60,000 wafers per month by the end of 2025, another 80,000 by the second quarter of 2026, and another 60,000 in the fourth quarter. SK hynix is reportedly pushing toward about 600,000 DRAM wafers per month in the second half of 2026, with its M15X fab starting around 10,000 wafers per month and ramping toward 50,000 by the fourth quarter. Micron’s US plan is even more revealing: roughly $200 billion in manufacturing and R&D, two leading-edge fabs in Idaho, up to four in New York, a modernized Virginia fab, advanced HBM packaging, and a goal of producing 40 percent of its DRAM in the United States.
Those are large numbers that most policymakers have rightly praised. What politicians touting new jobs, factories, and direct investments in their districts rarely mention is that they are also slow numbers.
Fabs arrive in stages. Equipment has to be installed, qualified, tuned, and driven up the yield curve. There are no leading-edge DRAM fabs currently operating in the United States, and the vast majority of production occurs in East Asia. The first Idaho fab isn’t scheduled to begin DRAM output until 2027.
Memory manufacturing cannot be willed into existence by an appropriations press release. A modern DRAM fab is a capital-intensive machine for turning 300mm silicon wafers into billions of tiny capacitors and transistors with absurdly low defect tolerance. HBM then adds another constraint. It does not merely ask for “more RAM.” Applied Materials describes HBM as stacks of advanced DRAM whose density and bandwidth come through 3D packaging, not just ordinary chip scaling, and Lam Research’s HBM guide describes the required connectors as microscopic copper-filled vertical wires that must be aligned across multiple memory layers with extreme precision.
Even the largest producer moves in increments measured in quarters and years. That timeline is a political problem. The price increases Gartner is projecting will land before the first American leading-edge DRAM fab ships volume, and they will land first on the buyers with no leverage in the supply chain: carmakers, telecoms, and medical equipment manufacturers whose products depend on memory they did not preorder in 2024.
Micron’s Virginia project has been described by government officials as a way to onshore 1-alpha DRAM and sustain legacy NOR, NAND, and DRAM production for aerospace, defense, automotive, and industrial uses. That is exactly the capacity the country needs. But that plant won’t be fully operational until 2030 at best, making it small solace for a router manufacturer, carmaker, or defense supplier who needs parts next quarter.
This should change how Washington talks about the next round of semiconductor policy. The first phase focused on leading-edge logic, export controls, and the GPU race. That focus was understandable. Logic chips and AI accelerators are strategic goods. But a strategy built around logic alone leaves households exposed first and the rest of the economy exposed second.
A better policy going forward would ask a different question before spending public money: does this increase capacity across the chip stack, or does it add another headline fab while leaving the next bottleneck untouched? The answer will vary by sector. AI needs HBM and advanced packaging. Automotive and aerospace need long-lifecycle DRAM, NAND, NOR, sensors, analog parts, and microcontrollers that remain available through long qualification cycles. Telecom needs DDR4 supply for ordinary network equipment. Defense industries need secure suppliers and parts that can survive harsh environments. The point is not to have government planners allocate every wafer. It is to stop pretending that the flashiest NVIDIA chip is the whole supply chain.
The United States does need more logic capacity. It needs advanced fabs, better packaging, strong export controls, and a credible path back into leading-edge manufacturing. But the next phase of chip policy should be judged by whether it builds balance, and by whether it arrives in time for the price hike already on its way to constituents.
Logic wins ribbon cuttings. Memory decides how much everything else costs.