It’s Cybernetics All The Way Down
Wiener, Ashby, and McCulloch saw our current dilemmas coming. We just stopped citing them.
In 1948, Norbert Wiener proposed a new science of control and communication. He called it cybernetics. The word sounds clinical, almost sterile, but the ambition of the nascent field was sweeping. Cybernetics was about machines and man, but it was also about regulation, adaptation, and equilibrium. It was about how organisms persist, how organizations survive, and how systems maintain themselves in the face of disturbance.
In the mid-century, this inquiry drew together mathematicians, engineers, biologists, and anthropologists. They met in seminar rooms and research labs to map the logic of feedback, but beneath the hard science were deeper questions. How do you steer something that reacts to being steered? When does intervention dampen instability? When does it amplify it?
For a time, cybernetics promised a unifying framework for understanding complex systems. Then the movement faded. Its conferences dissolved, its grand institutional ambitions dissipated, and its vocabulary slipped quietly into other disciplines: economics, computer science, ecology, management theory. The field receded. The ideas did not.
Seventy years later, legislators ask whether artificial intelligence systems can be aligned with human intent. Regulators debate whether recommendation algorithms amplify social instability. Economists warn that digital markets tip irreversibly under the pressure of network effects. The language feels native to the digital age; we speak of runaway optimization and systemic risk as though these were new discoveries.
They are not.
What is new is our apparent belief that we are improvising. Contemporary tech policy is usually narrated as a sequel to the 1990s, an overdue reckoning with the unbridled optimism of the early dot-com era. That story is simple, neat, and often politically convenient. Read generously, it suggests that our dilemmas are recent and that modest regulatory adjustment might resolve them. Read cynically, it lets some conclude that we were right all along and that changing the rules now will only invite disaster, and others that we got it all wrong around the turn of the century and that nothing short of extirpating techno-libertarianism will save us from calamity.
But the intellectual scaffolding of our current debates was erected decades earlier in the cybernetic effort to understand how complex systems can be governed without destabilizing them. We are rediscovering, in the language of AI safety, platform governance, and network architecture, problems that were once central to an entire interdisciplinary movement.
The peril lies not in forgetting specific names such as Wiener, Ashby, and McCulloch, but in forgetting that these problems have a history. Cybernetics wrestled openly with the limits of control, with second-order effects, with the danger of overcorrection. It treated governance as an engineering problem without reducing it to mechanical simplicity. In neglecting that tradition, we risk approaching adaptive digital systems as though they were static machines, crafting fixed rules for entities defined by feedback. We also risk narrowing our frame of reference until we reject workable solutions simply because they fall outside it.
To see our present clearly, we must recover the moment when thinkers first confronted the paradox of governing systems that learn, respond, and evolve. The alternative is to repeat their questions without the benefit of their insights.
It is cybernetics all the way down.
The Forgotten Framework
Cybernetics emerged in the 1940s and 1950s as an interdisciplinary effort to understand control and communication in animals and machines. Norbert Wiener, who coined the term, derived it from the Greek kubernētēs (meaning helmsman or pilot), the root of the Latin gubernator (meaning steersman, governor, or ruler). In defining his new field of study, Wiener wanted a term that captured the idea of steering systems—whether mechanical, biological, or social—through feedback and control mechanisms. A helmsman does not dictate the sea; he continuously and dynamically adjusts to currents, winds, and disturbances. Thus cybernetics names the study of control and communication in complex systems. From its basis in mathematics, the field quickly drew in engineers, biologists, anthropologists, and organizational theorists.
The core insight was simple but profound: complex systems maintain stability through feedback. A thermostat measures temperature and adjusts output accordingly. The human body regulates glucose levels through hormonal signaling. Organizations adapt to environmental change by processing information and updating behavior.
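The loop is easy to sketch in code. Below is a toy proportional controller in Python; every number is invented for illustration, and the point is only the shape of the loop: measure the deviation from a setpoint, feed back a correction proportional to the error, repeat.

```python
# A minimal negative-feedback loop: a proportional "thermostat".
# All parameters are invented for illustration.

def simulate_thermostat(setpoint=20.0, outside=5.0, steps=50):
    temp = outside                # the room starts at the outside temperature
    gain = 0.5                    # how aggressively the controller corrects
    leak = 0.1                    # heat lost to the environment each step
    for _ in range(steps):
        error = setpoint - temp                    # measure the deviation
        heating = gain * error                     # correct in proportion to it
        temp += heating - leak * (temp - outside)  # the system responds
    return temp

print(round(simulate_thermostat(), 2))  # settles near 17.5, short of 20.0
```

Note that the loop settles below its setpoint: purely proportional correction leaves a residual error, an early lesson of control theory and a small preview of why “just add feedback” is never the whole answer.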
Psychiatrist Ross Ashby formalized the idea that regulators must possess sufficient internal complexity to manage the systems they govern. His “Law of Requisite Variety” held that only variety can absorb variety: to stabilize a complex environment, a regulator must match its degrees of freedom. Operations theorist Stafford Beer applied these ideas to management and statecraft, proposing architectures for governing large-scale organizations and even national economies. Anthropologist Gregory Bateson pushed further, arguing that observers themselves are embedded in systems, a move toward what came to be called second-order cybernetics.
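Rendered in Shannon’s terms (a standard modern formulation rather than Ashby’s own notation), the law bounds how much disturbance a regulator can absorb:

H(E) ≥ H(D) − H(R)

where H(D) is the variety, or entropy, of the disturbances hitting the system, H(R) is the variety of responses the regulator can deploy, and H(E) is the residual variety in the outcomes we care about. A regulator with fewer moves than its environment has disturbances cannot, even in principle, hold that environment steady.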
The field’s vocabulary—feedback loops, homeostasis, adaptation, control—never disappeared. It diffused: into economics through general systems theory and information theory, into ecology and management science, and eventually into computing and network theory. By the time the commercial internet emerged, its architecture and vernacular already reflected cybernetics. When we describe platforms as “ecosystems” or worry about “runaway amplification,” we are speaking a cybernetic language whether we realize it or not.
The Misleading 1990s Story
The internet did not invent this way of thinking. It inherited it.
By the time the commercial web emerged in the 1990s, cybernetics as a field had largely dissolved. Its conferences were over; its grand institutional ambitions had receded. But its assumptions had already seeped into adjacent fields. The vocabulary shifted. The structure remained.
The early internet’s governing ethos is often described as libertarian: decentralized networks, permissionless innovation, self-organizing communities, markets over mandates. Section 230 becomes the emblem of this moment—an institutional bet on minimal ex ante control.
But even this posture was saturated with cybernetic logic. Markets were celebrated not as static equilibria but as information-processing systems. Price signals were feedback mechanisms. Open networks were resilient precisely because they distributed control across nodes capable of local adaptation. Self-governance was regulation emerging from decentralized feedback.
The real wager of the 1990s was not that complex systems require no regulatory steering. It was that they could steer themselves provided that the feedback loops were sufficiently open and the architecture sufficiently distributed.
Seen this way, today’s debates are not a clean break from an earlier naïveté. They are a dispute over the adequacy of that original feedback architecture. Whether they say it explicitly or not, critics of the current tech moment are arguing that certain loops—engagement optimization, network effects, data accumulation, etc.—have become destabilizing. Defenders counter that heavy-handed intervention risks distorting adaptive processes that still generate value.
Both sides are arguing within the same conceptual frame. The disagreement concerns how feedback should be structured, not whether feedback governs the system.
The 1990s operationalized cybernetics. Our present moment is less a repudiation of that framework than a recognition that steering large-scale digital systems may require more layered and deliberate forms of control than early internet idealists anticipated.
AI Alignment as a Control Problem
Consider AI alignment. Strip away the rhetoric about existential risk or superintelligence, and what remains is a classic cybernetic question: how do you design a system that reliably pursues intended goals in a dynamic, partially observable environment?
Large language models are trained through feedback: gradient descent against a loss, reinforcement learning from human feedback, and repeated rounds of evaluation. Policymakers now debate whether additional feedback mechanisms like external audits, red-teaming, usage monitoring, and incident reporting are necessary to stabilize model behavior.
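Gradient descent is the archetype of such a loop: measure the error, feed back a correction proportional to its gradient, repeat. A toy sketch in Python, with one parameter and a made-up quadratic loss standing in for billions of parameters:

```python
# Gradient descent as a feedback loop: measure error, feed back a correction.
# Toy example: steer a single parameter w to minimize the loss (w - 3)^2.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)   # derivative of the loss: the error signal

w = 0.0                      # initial guess
lr = 0.1                     # feedback gain (the learning rate)
for _ in range(100):
    w -= lr * grad(w)        # correction proportional to the measured error

print(round(w, 4))           # ~3.0: the loop has steered w to the target
```

The learning rate plays exactly the role of a controller’s gain: too low and the loop converges slowly, too high and it oscillates or diverges. Alignment debates are, in large part, about which further loops to wrap around this one.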
This is first-order cybernetics: improving the control system to better regulate the target system.
But alignment debates quickly drift into second-order territory. Regulators are themselves embedded in the system. Their interventions alter incentives, which change model development trajectories, which reshape the environment regulators must manage. Calls for compute thresholds, licensing regimes, or model registries are attempts to introduce higher-level control loops. In the terminology of cybernetics, these are meta-regulators overseeing regulators overseeing models.
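In control terms, that is an outer loop tuning an inner loop. A schematic sketch, with invented numbers and no pretense of modeling any real regulatory scheme: the adaptive layer watches the residual error of the inner controller and adjusts its gain accordingly.

```python
# Second-order control, schematically: an outer loop adjusts the gain of an
# inner feedback loop based on how well that loop performs. Invented numbers.

def run_inner_loop(gain, setpoint=20.0, outside=5.0, steps=20):
    temp = outside
    for _ in range(steps):
        temp += gain * (setpoint - temp) - 0.1 * (temp - outside)
    return abs(setpoint - temp)     # residual error the outer loop observes

gain = 0.1
for round_num in range(10):
    error = run_inner_loop(gain)
    print(round_num, round(gain, 3), round(error, 2))  # gain used, error seen
    gain += 0.05 * error            # meta-regulation: tune the regulator itself
```

Run it and the residual error shrinks round by round; push the outer loop much harder and the inner loop destabilizes, which is the overcorrection worry in miniature.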
The disagreement between “pause AI” advocates and “accelerate with guardrails” advocates is not a clash between fear and optimism. It is a disagreement over how much feedback, and at what layer, is necessary to maintain system stability.
That is a cybernetic dispute.
Content Moderation and Homeostasis
The same structure appears in platform governance. Social media platforms operate as large-scale feedback systems. Users produce content, algorithms amplify based on engagement signals, engagement alters user behavior, and behavior then reshapes content production.
Critics argue that engagement-maximizing algorithms create positive feedback loops that amplify extremism or misinformation. Defenders argue that excessive intervention disrupts organic community dynamics and suppresses legitimate speech. Both sides implicitly agree on the underlying model: platforms are systems whose outputs depend on feedback dynamics. The dispute concerns how to tune the regulator.
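A toy simulation makes the shared model concrete. Everything below is invented; it is a cartoon of the dynamic, not a model of any actual platform. Items gain exposure in proportion to their current share of engagement, optionally offset by a guaranteed floor of exposure:

```python
# Cartoon of engagement-driven amplification: each item's growth rises with
# its current share of total engagement, optionally offset by a uniform
# exposure floor. All parameters are invented.

def simulate(baseline=0.0, steps=40):
    engagement = [1.0, 1.0, 1.2]            # three items; one slightly ahead
    for _ in range(steps):
        total = sum(engagement)
        engagement = [
            e * (1 + e / total) + baseline  # positive feedback + uniform floor
            for e in engagement
        ]
    total = sum(engagement)
    return [round(e / total, 3) for e in engagement]

print(simulate(baseline=0.0))   # the early leader's share keeps compounding
print(simulate(baseline=0.5))   # a uniform exposure floor slows the runaway
```

The two calls at the bottom are, in effect, the two sides of the moderation debate: how much negative feedback to inject, and at what cost to the adaptive dynamics that make the system valuable.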
Should moderation be centralized or distributed? Should platforms rely on automated filters or community norms? Should governments impose constraints on platforms, effectively inserting a new control layer?
These are homeostatic questions. The goal is not perfection but stability within tolerable bounds. Too little intervention and the system destabilizes. Too much and it ossifies or collapses under rigidity. The language of “trust and safety” sounds moralistic. Structurally, it is managerial cybernetics applied to digital ecosystems.
Antitrust and Network Dynamics
Antitrust in the digital age has also adopted a cybernetic frame. Traditional antitrust focused on static measures: price, output, and market share. Contemporary debates revolve around network effects, tipping points, self-reinforcing dominance, and path dependence.
Platforms become dominant because feedback loops lock in users. More users attract more developers, and more developers attract more users. Data accumulation enhances service quality, which attracts still more users.
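The lock-in story is easy to exhibit. In the toy model below, in the spirit of W. Brian Arthur’s increasing-returns urn models but with invented parameters, each new user picks between two identical platforms with probability that rises faster than linearly in current share:

```python
# Toy model of network-effect tipping: adoption probability rises faster than
# linearly in current share (crude increasing returns). The squared-share rule
# and all parameters are invented for illustration.
import random

def simulate_market(new_users=10_000, exponent=2.0, seed=0):
    random.seed(seed)
    users = [10.0, 10.0]                   # two identical platforms at launch
    for _ in range(new_users):
        weights = [u ** exponent for u in users]
        p0 = weights[0] / sum(weights)     # increasing-returns adoption rule
        users[0 if random.random() < p0 else 1] += 1
    total = sum(users)
    return [round(u / total, 3) for u in users]

for seed in range(3):
    print(simulate_market(seed=seed))  # most runs tip decisively, though not always the same way
```

With exponent=1.0 the market is merely path-dependent, settling at an arbitrary split; above 1.0 it tips toward near-monopoly. Which regime a real market sits in is much of what reformers and skeptics are actually arguing about.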
Reformers argue that such dynamics justify structural interventions to disrupt runaway feedback: interoperability mandates, data portability, and corporate breakups. Skeptics warn that intervention may destabilize beneficial equilibria and undermine innovation. Again, this is not merely an economic argument. It is a dispute about how to manage feedback in complex networks. The question is whether to dampen positive feedback loops, inject negative feedback, or redesign the system’s architecture altogether.
Cybernetics provided the conceptual vocabulary for understanding self-reinforcing systems decades before digital platforms existed. Today’s antitrust debates simply apply that vocabulary to new substrates.
First-Order vs. Second-Order Governance
The most revealing fault line in tech policy is not libertarian versus conservative, nor innovation versus safety. It is first-order versus second-order governance.
First-order governance assumes the regulator stands outside the system. It focuses on correcting specific failures such as misinformation, monopoly pricing, or biased outputs. The regulator measures deviations and adjusts inputs accordingly.
Second-order governance recognizes that regulators are themselves part of the system. Interventions reshape incentives, information flows, and power distributions in ways that make governance recursive.
Debates over algorithmic transparency illustrate this tension. First-order logic suggests that more visibility improves oversight. Second-order logic asks how actors will adapt once visibility changes strategic behavior. Will platforms game metrics? Will bad actors exploit disclosed vulnerabilities? Will transparency itself alter the dynamics it seeks to monitor?
Once one adopts a second-order perspective, static rulemaking appears insufficient. The emphasis shifts toward adaptive institutions: regulators capable of learning, updating, and responding in real time. This is Ashby’s Law applied to governance: only a regulator with sufficient variety can absorb the complexity of the system it oversees.
The Policy Payoff
Recognizing the cybernetic lineage of tech policy debates is not an exercise in intellectual archaeology. It clarifies what kind of institutional design problems we actually face.
If digital platforms and AI systems are complex adaptive systems governed by feedback, then static, one-shot rules will often misfire. Policymakers should think less in terms of fixed prohibitions and more in terms of dynamic control architectures.
That may mean building agencies with technical capacity and iterative oversight authority. It may mean designing regulatory sandboxes that allow feedback between innovators and regulators. It may mean embedding measurement and evaluation mechanisms directly into governance frameworks.
It also counsels humility. Cybernetic systems are notoriously difficult to control. Overcorrection can destabilize as easily as neglect. The goal is not to eliminate feedback but to tune it.
Finally, it reframes political disagreements. Many apparent ideological clashes mask shared assumptions about systemic risk and control. Both decentralization advocates and centralization advocates seek stability. They differ over which control architecture best achieves it.
The cybernetic perspective reveals that we are not arguing about whether to govern technology. We are arguing about how to design feedback loops that keep sprawling digital systems within acceptable bounds.
Looking Forward
In the mid-century, cybernetics promised a unifying science of systems. Its ambitions exceeded its lifespan, but its conceptual tools proved durable. They seeped into computer science, economics, and a plethora of other fields. They now structure the way we think about AI, platforms, and digital markets.
As artificial intelligence systems grow more autonomous and digital networks more entangled with physical infrastructure, the stakes of cybernetic governance increase. We are not merely regulating firms or products. We are managing adaptive systems that learn, respond, and evolve.
Understanding this lineage does not solve our policy dilemmas. It does, however, clarify them. It suggests that the central challenge of tech policy is not choosing between freedom and control, or innovation and safety. It is designing viable control systems for increasingly complex socio-technical networks.
Once you see it, the pattern becomes difficult to unsee. Feedback everywhere. Control layered upon control. Observers embedded in the systems they seek to steer.
It’s cybernetics all the way down.