
Cognitive migration is not just an individual journey; it is also a collective and institutional one. As AI reshapes the terrain of thought, judgment and coordination, the very foundations of our schools, governments, corporations and civic systems are being called into question.
Institutions, like people, now face the challenge of rapid change: rethinking their purpose, adapting their structures and rediscovering what makes them essential in a world where machines can increasingly think, decide and produce. Like people who are undergoing cognitive migration, institutions — and the people who run them — must reassess what they were made for.
Discontinuity
Institutions are designed to promote continuity. Their purpose is to endure, to offer structure, legitimacy and coherence across time. It is those very attributes that contribute to trust. We rely on institutions not only to deliver services and enforce norms, but to provide a sense of order in a complex world. They are the long-arc vessels of civilization, meant to hold steady as individuals come and go. Without viable institutions, society risks upheaval and an increasingly uncertain future.
But today, many of our core institutions are reeling. Having long served as the scaffolding of modern life, they are being tested in ways that feel not only sudden, but systemic.
Some of this pressure comes from AI, which is rapidly reshaping the cognitive terrain on which these institutions were built. But AI is not the only force. The past two decades have brought rising public distrust, partisan fragmentation and challenges to institutional legitimacy that predate the generative AI technological wave. From increasing income inequality, to attacks on scientific process and consensus, to politicized courts, to declining university enrollments, the erosion of trust in our institutions has multiple causes, as well as compounding effects.
In this context, the arrival of increasingly capable AI systems is not merely another challenge. It is an accelerant, fuel to the fire of institutional disruption. This disruption demands that institutions adapt their operations and revisit foundational assumptions. What are institutions for in a world where credentialing, reasoning and coordination are no longer exclusively human domains? All this institutional reinvention needs to take place at a pace that defies their very purpose and nature.
This is the institutional dimension of cognitive migration: A shift not just in how individuals find meaning and value, but in how our collective societal structures must evolve to support a new era. And as with all migrations, the journey will be uneven, contested and deeply consequential.
The architecture of the old regime
The institutions in place now were not designed for this moment. Most were forged in the Industrial Age and refined during the Digital Revolution. Their operating models reflect the logic of earlier cognitive regimes: stable processes, centralized expertise and the tacit assumption that human intelligence would remain preeminent.
Schools, corporations, courts and government agencies are structured to manage people and information on a large scale. They rely on predictability, expert credentials and well-defined hierarchies of decision-making. These are traditional strengths that — even when considered bureaucratic — have historically offered a foundation for trust, consistency and broad participation within complex societies.
But the assumptions beneath these structures are under strain. AI systems now perform tasks once reserved for knowledge workers, including summarizing documents, analyzing data, writing legal briefs, performing research, creating lesson plans and teaching, coding applications and building and executing marketing campaigns. Beyond automation, a deeper disruption is underway: The people running these institutions are expected to defend their continued relevance in a world where knowledge itself is no longer a uniquely human asset, nor as highly valued as it once was.
The relevance of some institutions is being called into question by outside challengers, including tech platforms, alternative credentialing models and decentralized networks. This essentially means that the traditional gatekeepers of trust, expertise and coordination are being challenged by faster, flatter and often more digitally native alternatives. In some cases, even long-standing institutional functions such as adjudicating disputes are being questioned, ignored or bypassed altogether.
This does not mean institutional collapse is inevitable. But it does suggest that the current paradigm of stable, slow-moving and authority-based structures may not endure. At a minimum, institutions are under intense pressure to change. If institutions are to remain relevant and play a vital role in the age of AI, they must become more adaptive, transparent and attuned to the values that cannot readily be encoded in algorithms: human dignity, ethical deliberation and long-term stewardship.
The choice ahead is not whether institutions will change, but how. Will they resist, ossify and fall into irrelevance? Will they be forcibly restructured to meet transient agendas? Or will they deliberately reimagine themselves as co-evolving partners in a world of shared intelligence and shifting value?
First steps of institutional migration
A growing number of institutions are beginning to adapt. These responses are varied and often tentative, signs of motion more than full transformation. These are green shoots; taken together, they suggest that the cognitive migration of institutions may already be underway.
Yet there is a deeper challenge beneath these experiments: Many institutions are still bound by outdated methods of operating. The environment, however, has changed. AI and other factors are redrawing the landscape, and institutions are only beginning to recalibrate.
One example of change comes from an Arizona-based charter school where AI plays a leading role in daily instruction. Branded as Unbound Academy, the school uses AI platforms to deliver core academic content in condensed, focused sessions tailored for each child. The model shows promise for improving academic achievement while also allowing students time later in the day to work on life skills, project-based learning and interpersonal development. In this model, teachers are reframed as guides and mentors, not content deliverers. It is an early glimpse of what institutional migration might look like in education: Not just digitizing the old classroom, but redesigning its structure, human roles and priorities around what AI can do.
The World Bank reported on a pilot program in Nigeria that used AI to support learning through an after-school program. The results revealed “overwhelmingly positive effects on learning outcomes,” with AI serving as a virtual tutor and teachers providing support. Testing showed students achieved “nearly two years of typical learning in just six weeks.”
Similar signals are emerging elsewhere. In government, a growing number of public agencies are experimenting with AI systems to improve responsiveness: triaging constituent inquiries, drafting preliminary communications or analyzing public sentiment. Leading AI labs such as OpenAI are now tailoring their tools for government use. These nascent efforts offer a glimpse into how institutions might reallocate human effort and attention toward interpretation, discretion and trust-building; functions that remain profoundly human.
While most of these initiatives are framed in terms of productivity, they raise deeper questions about the evolving role of the human within decision-making structures. In other words, what is the future of human work? The conventional view, voiced by futurist Melanie Subin in a CBS interview, is that “AI is going to change jobs, replace tasks and change the nature of work. But as with the Industrial Revolution and many other technological advancements we have seen over the past 100 years, there will still be a role for people; that role may just change.”
That relatively benign evolution stands in stark contrast to the sobering prediction from Dario Amodei, CEO of Anthropic, one of the world’s most powerful creators of AI technologies. In his view, AI could eliminate half of all entry-level white-collar jobs and push unemployment to 10 to 20% within the next one to five years. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” he said in an interview with Axios. His dire prediction could come to pass, although perhaps not as quickly as he suggests, as diffusion of new technology across society often takes longer than expected.
Nevertheless, the potential for AI to displace workers has long been known. As early as 2019, Kevin Roose wrote about conversations he had with corporate executives at a World Economic Forum meeting. “They’ll never admit it in public,” he wrote, “but many of your bosses want machines to replace you as soon as possible.”
In 2025, Roose reported that there are signs this is beginning to occur. “In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that AI companies are racing to build ‘virtual workers’ that can replace junior employees at a fraction of the cost.”
Across all institutional domains, there are green shoots of transformation. But the throughline remains fragmented, merely early signals of change and not yet blueprints. The deeper challenge is to move from experimentation to structural reinvention. In the interim, there could be a lot of collateral damage, not only to those who lose their jobs but also to the overall effectiveness of institutions amidst turmoil.
How can institutions move from experimentation to integration, from reactive adoption to principled design? And can this be done at a pace that adequately reflects the rate of change? Recognizing the need is only the beginning. The real challenge is designing for it.
Institutional design principles for the next era
If AI acceleration continues, it will put immense pressure on institutions to respond. If institutions can move at pace, the question becomes: How can they move from reactive adoption to principled design? They need not just innovation, but informed vision and principled intention. Institutions must be reimagined from the ground up, built not just for efficiency or scale, but for adaptability, trust and long-term societal coherence.
This requires design principles that are neither technocratic nor nostalgic, but grounded in the realities of the migration underway: shared intelligence, human vulnerability and the goal of creating a more humane society. With that in mind, here are three practical design principles.
Build for responsiveness, not longevity
Institutions must be designed to move beyond fixed hierarchies and slow feedback loops. In a world reshaped by real-time information and AI-augmented decision-making, responsiveness and adaptability become core competencies. This means flattening decision layers where possible, empowering frontline actors with tools and trust and investing in data systems that surface insights quickly, without outsourcing judgment to algorithms alone. Responsiveness is not just about speed. It is about sensing change early and acting with moral clarity.
Integrate AI where it frees humans to focus on the human
AI should be deployed not as a replacement strategy, but as a refocusing tool. The most forward-looking institutions will utilize AI to absorb repetitive tasks and administrative burdens, thus freeing human capacity for interpretation, trust-building, care, creativity and strategic thinking. In education, this might mean AI-created and presented lessons that allow teachers to spend more time with struggling students. In government, it could mean greater automated processing that gives human staff more time to solve complex cases with empathy and discretion. The goal should not be to fully automate institutions. It is instead to humanize them. This principle encourages using AI as a support beam, not a substitute.
Keep humans in the loop where it matters most
Institutions that endure will be those that make room for human judgment at critical points of interpretation, escalation and ethics. This means designing systems where human-in-the-loop is not a checkbox, but a structural feature that is clearly defined, legally protected and socially valued. Whether in justice systems, healthcare or public service, the presence of a human voice and moral perspective must remain central where stakes are high, and values are contested. AI can inform, but humans must still decide.
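As a purely illustrative sketch, human-in-the-loop as a structural feature rather than a checkbox could be expressed as a routing rule that no single confidence score can override. Everything here is a hypothetical assumption for illustration — the `Decision` record, the confidence floor and the list of protected domains are not drawn from any real institution's system:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Hypothetical policy values, chosen only for illustration.
CONFIDENCE_FLOOR = 0.85
PROTECTED_DOMAINS = {"benefits_denial", "sentencing", "medical"}

def route(decision: Decision, domain: str) -> str:
    """Return who decides: the automated pipeline or a human reviewer.

    The escalation rule is structural: high-stakes domains always go
    to a person, no matter how confident the model claims to be.
    """
    if domain in PROTECTED_DOMAINS:
        return "human"
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human"
    return "automated"

# A routine case may be automated; a contested, high-stakes one never is.
routine = Decision("A-101", "approve permit renewal", 0.97)
contested = Decision("B-202", "deny benefits claim", 0.99)
print(route(routine, "permits"))            # automated
print(route(contested, "benefits_denial"))  # human
```

The design point is the first branch: escalation to a human is a hard rule attached to the domain, not a tunable threshold, which is one way "clearly defined, legally protected and socially valued" might translate into system design.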
These principles are not meant to be static rules, but directional choices. They are starting points for reimagining how institutions can remain human-centered in a machine-enhanced world. They reflect a commitment to modernization without moral abandonment, to speed without shallowness or callousness and to intelligence shared between humans and machines.
Beyond adaptation: Institutions and the question of purpose
In times of disruption, individuals often ask: “What was I made for?” We must ask the same of our institutions. As AI upends our cognitive terrain and accelerates the pace of change, the relevance of our core institutions is no longer guaranteed by tradition, function or status. They, too, are subject to the forces of cognitive migration. Like individuals, their future must include decisions about whether to resist, retreat or transform.
As generative AI systems take on tasks of reasoning, research, writing and coordination, the foundational assumptions of institutional authority, including expertise, hierarchy and predictability, begin to fracture. But what follows cannot be a hollowing out, because the fundamental purpose of institutions is too essential to abandon. It must be a re-founding.
Our institutions should not be replaced by machines. They should instead become more human: More responsive to complexity, anchored in ethical deliberation, capable of holding long-term visions in a short-term world. Institutions that do not adapt with intention may not survive the turbulence ahead. The dynamism of the 21st century will not wait.
This is the institutional dimension of cognitive migration: A reckoning with identity, value and function in a world where intelligence is no longer our exclusive domain. The institutions that endure will be those that migrate not just in form, but in soul, crossing into new terrain with tools that serve humanity.
For those shaping schools, companies or civic structures, the path forward lies not in resisting AI, but in redefining what only humans and human institutions can truly offer.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.