Collective Strategy: A Framework for Solving Large-Scale Social Problems

Research Brief  |  Issue No. 1  |  January 8, 2018 

By Robert D. Lamb, PhD
bob@foundationforinclusion.org

What does it take to solve really hard problems?

Chronic poverty, cycles of violence, racial and ethnic mistrust, global trafficking, tensions over migration, environmental damage—many of today’s biggest challenges seem immune to even the most heroic efforts to resolve them. In divided societies, progress has proven to be reversible, even when hundreds of organizations, thousands of people, and millions of dollars are dedicated to permanent solutions.

Are problems becoming harder, or is our ability to work together to solve them becoming weaker?

I’ve spent most of my career thinking about how societies change—especially how they fall apart or come together—and how some people manage to influence that change while others fail. This policy brief summarizes my most recent research (some of which is ongoing) and describes methods I have developed to determine what it would take to achieve large-scale change on any topic—what would need to change (and by how much), who can change it (and by how much), and what’s missing. It’s a good framework for judging the chances of success—or improving the odds.

The Foundation for Inclusion (FFI) is applying these methods today while continuing research that will make them even more powerful in the future.

Problem-Solving Systems

Everyone and everything that affects a particular problem is collectively referred to as a system, an ecosystem, or a social-change ecosystem (the “development system” for poverty reduction, the “freedom ecosystem” for human trafficking, etc.). Anyone making a problem worse is part of that ecosystem. So is everyone who actively works to make things better—but they don’t always think of themselves that way. Many think of themselves as visitors fixing the problem, the way a plumber fixes your faucet but doesn’t move in with you.

That’s not a bad way to think about simple problems (a family living on the street is cold, so you give them a blanket). But multidimensional problems tend to get divided into smaller problems that can be dealt with separately (homelessness as such is addressed piecemeal—a shelter, a clinic, a thrift store, legal services, housing advocacy, etc.). Yet solving the pieces doesn’t usually solve the problem—and sometimes makes it worse.

Why can’t a complex problem be solved just by dividing it up into solvable pieces?

Because those “solvable pieces” tend to interact with each other in complex ways, creating vicious cycles, domino effects, causal delays, and other strange behaviors. That makes it hard to predict how a change in one part of the ecosystem will affect any other part (rent control makes housing more affordable, but homelessness gets worse). Then, into this already complex ecosystem, all the problem solvers show up to take on different “solvable pieces” of the larger problem. The more problem solvers there are in an ecosystem, the more complex their interactions might become. And the overall problem will get even more complex and therefore less likely to be solved.
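
To make that dynamic concrete, here is a deliberately oversimplified toy simulation (with invented numbers, not a validated model of any housing market) of the rent-control example: a fix that improves one piece immediately but erodes another piece after a delay, leaving the overall problem worse off within a few years.

```python
# Toy illustration only: a "fix" improves affordability right away but suppresses
# new supply after a one-year lag, and homelessness responds to both.

def simulate(apply_fix: bool, months: int = 60) -> list:
    homelessness = 0.30       # hypothetical rate
    affordability = 0.50      # hypothetical index (higher is better)
    supply = 0.50             # hypothetical index of available units
    if apply_fix:
        affordability = 0.70  # the fix helps renters immediately...
    history = []
    for month in range(months):
        if apply_fix and month > 12:
            supply -= 0.01    # ...but construction dries up, with a delay
        change = 0.02 * (0.50 - affordability) + 0.05 * (0.50 - supply)
        homelessness = max(0.0, homelessness + change)
        history.append(homelessness)
    return history

baseline, with_fix = simulate(False), simulate(True)
print(f"after 1 year : {baseline[11]:.2f} without the fix, {with_fix[11]:.2f} with it")
print(f"after 5 years: {baseline[-1]:.2f} without the fix, {with_fix[-1]:.2f} with it")
```

In this toy, the fix looks like progress for the first year and then backfires, which is exactly the kind of behavior that makes piecewise solutions unreliable.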

Happily, more and more problem solvers—governments, foundations, charities, contractors, development agencies, volunteers, investors, etc.—now recognize that uncoordinated efforts against complex problems are counterproductive. Some, therefore, form multistakeholder partnerships and organize for collective impact. That can keep added complexity to a minimum.

But this comes with its own set of challenges, not the least of which is the diversity of organizational cultures, processes, data practices, and relationships that can complicate their collective ability to make and implement good decisions.

In research I did two years ago, I called this the “dual-system problem.” The kinds of issues we’re talking about here—conflict, racism, trafficking—are complex, in the technical sense of being produced by complex systems. But the organizations that work on these problems are usually embedded in their own, separate ecosystems as well—systems designed to make and implement decisions about how to solve a wide variety of problems. And many of these problem-solving ecosystems, it turns out, are also complex systems and therefore equally difficult to navigate.

I first observed the dual-system problem in my work on countries experiencing complex conflicts. Researchers had repeatedly identified a set of best practices for providing aid in such environments. Yet international donors had repeatedly failed to institutionalize those practices, and their record of success was dismal. The international aid delivery system had so many different players, interests, cultures, procedures, frameworks, regulations, and data practices that nobody—from decisionmakers at the top to implementers in the field—had any idea what their own systems were actually capable of delivering.

That means any solution that might be proposed for a particular problem has to fight its way through not one but two complex systems: first, the problem-solving system, where it’s not guaranteed the solution will be implemented as decided, and second, the problem system, where it’s not certain how the implemented solution will actually affect the outcome.

The dual-system problem teaches us something that should be obvious but is less trivial than it sounds: large-scale social problems will not get solved if the problem-solving system is not up to the task. Our focus therefore needs to shift from solving problems to fixing problem-solving systems.

Problem-Solving Capabilities

But why are some problem-solving systems up to the task of solving hard problems while others are not?

Problem-solving systems can be as simple as a committee or as complicated as artificial intelligence. A policy system is a problem-solving system designed to solve collective action problems. A computer solves mathematical and logical problems. A business solves problems for its customers, as a charity might for its community. Organ systems in mammals solve problems like removing toxins.

These all might operate at different scales and in dramatically different domains. But they all are problem-solving systems.

What can people who want to make the world better learn from comparing these systems with each other?

To answer this, I developed a framework describing six levels of problem-solving capability, each level corresponding to the sophistication of the system, along with examples of problems, solutions, and tools relevant to each level:

  • Level 1 (linear)—static electricity, communication, theft, force, algebra
  • Level 2 (dynamic)—temperature control, analog circuits, species diversity, blackmail, nonlinear algebra
  • Level 3 (complex)—digital circuits, industrial robots, rigid bureaucracies, most coalitions, weather, algorithms
  • Level 4 (adaptive)—autonomous robots, bee colonies, human social units, agile startups, children, heuristics
  • Level 5 (intelligent)—human individuals, artificial intelligence, metaheuristics
  • Level 6 (conscious)—human mind

There’s more than meets the eye in this framework. What it shows is not that some problems are harder to solve than others but rather that some problems are fundamentally unsolvable by lower-level problem-solving systems. A complex problem (Level 3) simply cannot be solved using linear thinking (Level 1).

It also adds an important complication to dual-system problems. It’s hard enough when the problem system and the problem-solving system are at the same level. This research shows that when the problem sits at a higher level than the system trying to solve it, the dual-system problem becomes fundamentally insurmountable. The international aid system is at best a low Level 4 (adaptive) system (and often a Level 3). No wonder, then, that it has so few successes in conflict environments, most of which today likely require Level 5 (intelligent) solutions.

Even worse, when more problem solvers take on different aspects of a problem without coordinating, they increase the complexity of the problem-solving system and thereby decrease their collective problem-solving capacity. The harder we try, the worse we make things!

We seem, then, to have a paradox. Large-scale social problems are so difficult to solve that we need to break them down into smaller, more solvable pieces. But due to the nature of complex problems, solving those individual pieces will probably never lead to collective success and might well undermine the collective capability to solve the large-scale problem everyone was aiming for in the first place.

This is why civil wars are becoming more persistent, responsible climate policies have been so difficult to enact, cycles of poverty and hate persist for generations, and equal opportunity remains a distant dream for many: According to my own research, humanity’s problem-solving systems are fundamentally incapable of solving these problems.

It took me a year to find the loophole.

Upgrading Problem-Solving Systems

This is solvable.

If everyone builds auto parts but there’s no factory, there won’t be any cars. You can’t launch and land a rocket if you don’t know how the navigation, propulsion, and other subsystems fit together. A global communication network is more than a bunch of interconnected devices.

Each component and subsystem in complex engineered systems like these is designed and built by a team that is structured to have the capability level needed to solve the problem at hand—much as fact-checkers are not taking on the whole problem of eradicating racism, only the sub-problem of correcting racially biased information. Others are working on the other subsystems—lobbying for justice reforms, or countering media stereotypes (or upgrading the car battery, or redesigning the brake system). But even if all of them are individually effective, they won’t be collectively successful without one more thing. In systems engineering, the person who makes all the components and subsystems work together as a unit is called a systems integrator.

There is no systems integrator for social change. So I decided to build one.

Individual humans are Level 6 (conscious) systems. The most sophisticated machines are Level 5 (intelligent) systems, nearing human levels of intelligence. But human collectives (organizations, coalitions, communities, societies) aren’t as sophisticated problem solvers as human individuals (or intelligent machines), because the humans making them up are capable of disagreeing with each other, whereas organ systems and machine components tend to interoperate as integrated wholes. Collectives, therefore, are at best Level 4 (adaptive) systems—and much more commonly act like Level 3 (complex) systems.

Humanity needs an upgrade!

Within the capability framework are clues to how problem-solving systems can be upgraded. What does it take to build a machine that is adaptive (Level 4) rather than merely complex (Level 3)? What about intelligent (Level 5)?

  • A complex circuit (Level 3) needs some memory and an algorithm—detailed instructions it can follow to achieve its goal.
  • An autonomous robot (Level 4) needs memory to keep track of progress toward its goal, plus a heuristic, or rule of thumb, it can follow when something unexpected happens.
  • An intelligent machine (Level 5) needs enough memory to track successes and failures so it can learn the best way to adapt over time, plus a metaheuristic, or the ability to discover its own heuristics, so it can figure out how to solve new problems in the future. (A minimal sketch contrasting these three ingredients appears after this list.)
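
For readers who find code easier than abstractions, the sketch below contrasts the three ingredients just named: a fixed algorithm, a rule-of-thumb heuristic, and a metaheuristic that picks among heuristics based on what actually works. The toy task (finding a high point on an unseen curve) and all numbers are invented for illustration; nothing here is taken from the capability framework itself.

```python
import random

def landscape(x: float) -> float:
    """The 'problem': a noisy curve the solver cannot inspect in advance."""
    return -(x - 7.3) ** 2 + random.uniform(0, 1)

def level3_algorithm() -> float:
    """Fixed instructions: evaluate a predetermined grid and return the best point."""
    grid = [0, 1, 2, 3, 4, 5]        # written before the problem was ever seen
    return max(grid, key=landscape)  # fails quietly if the peak lies off the grid

def level4_heuristic(step: float = 0.5, tries: int = 200) -> float:
    """Rule of thumb: from wherever you are, keep any move that looks better."""
    x = 0.0
    for _ in range(tries):
        candidate = x + random.choice([-step, step])
        if landscape(candidate) > landscape(x):
            x = candidate
    return x

def level5_metaheuristic() -> float:
    """Pick among heuristics: run several step sizes, keep whichever result scores best."""
    results = [level4_heuristic(step) for step in (0.1, 0.5, 2.0)]
    return max(results, key=landscape)

for name, solver in [("Level 3", level3_algorithm),
                     ("Level 4", level4_heuristic),
                     ("Level 5", level5_metaheuristic)]:
    print(f"{name}: best x found = {solver():.2f}")
```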

At each step, the machine becomes more capable, not just by acquiring more useful things—circuitry, memory, data, code—but by making those things more accessible to more system components that might need them. There’s more to it than that, obviously. But this can help us start imagining what our human problem-solving systems might look like if we could upgrade our collective efforts to Level 4 and someday to Level 5:

  • A Level 4 (adaptive) coalition would have shared goals that every organization and individual in the coalition knows and agrees with, and the entire coalition would be able to move toward those goals moment by moment, overcoming challenges as they arise based on a common and precise understanding of the current state of affairs and some reasonable collective-decision heuristic (policies, procedures, common sense, shared narratives, etc.) that everyone has equal access to.
  • A Level 5 (intelligent) coalition would do all that plus consistently learn from mistakes, from history, and from all available data sources; stick with long-term plans in the face of short-term challenges; and make shared sacrifices for the sake of the long-term collective good. Everyone would not only have knowledge of the shared goal but also understand the capabilities, needs, and desires of everyone else in the system. And they would be able to make collective decisions about what to do right now based on optimal results for all.

We are still a ways off from having the technology—and the trust in technology—that we’ll need to turn human collectives into intelligent (much less conscious) problem-solving systems. But we can use existing tools—and we can build the platforms—to start making immediate progress with those coalitions and organizations that are ready for an upgrade to the high end of Level 4 (adaptive).

Finding Hidden Structures

I started FFI a year and a half ago while some of this research was ongoing. But preliminary results gave me confidence that an upgrade was possible: methods existed to synthesize knowledge and experience about social problems into strategies to address them; techniques existed to map out who has influence in society; and tools existed to scale up influence. I was convinced those methods, techniques, and tools could be combined into new processes and platforms that could help problem solvers become collectively more effective. So I started an organization to put it into practice.

A conceptual insight, common among systems thinkers and design thinkers, connected that conviction to what eventually became a practical method for making it happen: the difference between emergent systems and designed systems. All systems have a structure—the particular way their components and subsystems fit together—but in some the structure is hidden:

  • In designed systems—like factories or rockets—you start with the outcome you want to achieve and work backwards toward the structure needed to produce it. The structure is explicit.
  • In emergent systems—like weather or ant colonies—there is no designer: the outcome emerges organically as the different parts encounter each other and react in whatever way is in their nature. The structure is implicit—unwritten and sometimes mysterious.

No engineer would design a society to produce civil wars. But because it is in the nature of humans to self-identify with some people and not others, interactions between different groups often end up settling into patterns that produce willful misunderstanding and dehumanization. Mass violence emerges from those accidental structures. That’s why civil wars are so hard to resolve: the structure producing the violence is implicit. How can you influence a social system whose structure is hidden? A Level 3 organization will only ever see the Level 3 structures within a Level 4 problem—never the full Level 4 structure.

Of course the real world isn’t so simple. Social and political systems often look like they are designed: we have laws, constitutions, and institutions that formally structure our social relationships. But problem solvers often forget that the underlying social relationships are still messy and emergent; certainly they are influenced by the formal structures, but people interact with each other in ways that are driven by values, facts, emotions, narratives, and norms that emerge organically as well. All human collectives are hybrid systems with both formal (designed) and informal (emergent) structures. Focusing on policy change and not broader structural change almost guarantees that progress against large-scale social problems will eventually be reversed.

What this conceptual insight tells us is that if you want to solve a big problem involving human beings, you need a way to discover the real underlying structure of the system producing that problem. And you need to find a way to redesign the problematic emergent structures.

This isn’t social engineering. This is more like reverse engineering—studying a social system to discover its implicit structure so we can understand how it keeps producing these problematic outcomes.

From there it’s natural to imagine a framework for an actionable strategy to solve the problem. We would need:

  1. A way to see the whole system at once, including everyone and everything affecting the problem (or the solution).
  2. A way to discover how the parts of the system work together to produce the problem (or the solution).
  3. A way to fill the gaps between how the system currently functions (producing problematic outcomes) and how the system would ideally function to produce more desirable outcomes.

This whole-parts-gaps framework became the basis for how I structured FFI. The Collective Strategy division is the systems integrator, focused on figuring out how to fix whole problem-solving systems, while the “parts” and “gaps” divisions—Impact Accounting and the Inclusion Incubator—help to refine, implement, and monitor the progress of collective strategies.

To make progress in the real world, it remains only to turn this framework into a method for building feasible collective strategies.

Seven Questions to Collective Strategy

To make such a method useful, it needs to have certain features. As the whole-parts-gaps framework shows, the method must be capable of producing digestible information about the whole system; useful details about every part of it (everyone and everything that significantly affects the problem); and some way of identifying the gap between the problematic structure of the system as it is today and the ideal structure of the system as it would be if it were producing the desired outcome.

Because the problems we’re talking about affect millions or even billions of people, there is a pressing need to make progress on them as soon as possible—which means the method needs to use existing tools and techniques combined in novel ways. It’s fine if the method is implemented through a labor-intensive managed service in the short term (that’s what FFI is for). But ideally it will also be amenable to automation so that, over time, it can be turned into a self-service platform (which we plan to develop) so in the future people can upgrade their problem-solving capabilities themselves.

Existing collective-impact frameworks are useful starting points. Two dozen nonprofits and government agencies working separately to free enslaved people from trafficking networks will collectively produce some set of outcomes. But it will be hard to say exactly what those outcomes will be or if they will, on the whole, be positive or sustainable. By working separately, these changemakers inadvertently increase the complexity of the problem-solving structures within trafficking ecosystems—and therefore the likelihood their collective outcomes will be emergent rather than designed. But by coordinating their work against a common set of measurable goals—their desired collective impact—multistakeholder partnerships, well-managed coalitions, and other collective-impact groups can reduce their collective contribution to complexity and increase their ability to predict and influence the outcomes of their collective efforts.

But a collective-impact approach still isn’t enough to solve really contentious problems, because by nature it is focused only on fixing one problem-solving structure within the broader social-change ecosystem. Participants might be guided by a broader theory of change—statements linking their work to the outcomes they want to achieve—but most theories of change tend to reflect linear thinking (Level 1), and even Level 2 thinking isn’t remotely good enough.

Theories of change—sometimes called theories of success, or simply strategies—can be much more sophisticated. The recognition that systems thinking has recently received for being more useful than linear thinking is well deserved. Systems mapping makes it possible to explore how all the factors that significantly affect a problem are interrelated. And system dynamics methods can produce mathematical models of those interrelated factors, useful for running simulations to test out strategies. Both are excellent tools for mapping out full ecosystems.
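
As a rough illustration of what a systems map can look like as data, the sketch below encodes a handful of signed causal links (all factors and signs are invented for this example) and searches them for feedback loops, labeling each loop as reinforcing or balancing. System dynamics methods go a step further by attaching equations to such a map so it can be simulated.

```python
from itertools import permutations

# Signed causal links: +1 means the two factors move together, -1 means one
# pushes the other down. All factors and signs are invented for illustration.
links = {
    ("poverty", "crime"): +1,
    ("crime", "investment"): -1,
    ("investment", "jobs"): +1,
    ("jobs", "poverty"): -1,
    ("policing", "crime"): -1,
}

def find_loops(links):
    """List every feedback loop and label it reinforcing or balancing."""
    factors = sorted({f for pair in links for f in pair})
    loops = []
    for size in range(2, len(factors) + 1):
        for path in permutations(factors, size):
            if path[0] != min(path):   # keep only one rotation of each loop
                continue
            pairs = list(zip(path, path[1:] + (path[0],)))
            if all(p in links for p in pairs):
                sign = 1
                for p in pairs:
                    sign *= links[p]
                loops.append((path, "reinforcing" if sign > 0 else "balancing"))
    return loops

for path, kind in find_loops(links):
    print(" -> ".join(path), f"({kind})")  # e.g., a poverty-crime-investment-jobs vicious cycle
```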

But system models tend to focus on factors—processes, variables, constants, stocks, flows, etc.—rather than on actors. For a strategy to be actionable, we need to know who is influencing the problem (for better or worse). Influence mapping focuses more on actors than factors. Network analysis and agent-based modeling do the same but, like system dynamics, can also show mathematically how interactions between parts of a system can produce system-level outcomes.

All of these methods are designed to analyze Level 3 (complex) problems, but their best implementations can put them at Level 4 (adaptive), and some, like adaptive agent modeling, are designed specifically for Level 4 problems.

It would be useful to have a method designed specifically to emphasize factors and actors at the same time, to produce something like a system dynamics model in which each factor is somehow linked to data about who has influence over that factor. There have been workarounds to accomplish this in the past. But a recent innovation simplifies the task. Entity-based system dynamics is a very new method—capable software has been available for less than a year—but it combines features of both dynamic and agent modeling. Out of the box it is capable of solving problems at the high end of Level 4 (adaptive); in the hands of brilliant modelers it might reach Level 5 (intelligent) within a few years. And if it can be combined with machine learning techniques to automate module- and model-building, it has the potential to become a solid Level 5 method.
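
The sketch below is not the entity-based system dynamics software itself, only a toy data structure conveying the core idea: each factor in a model carries data about which actors influence it and how strongly, so the same model can help answer both what drives the outcome and who can move it. The actor names and weights are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str
    value: float
    influencers: dict[str, float] = field(default_factory=dict)  # actor -> weight (assumed)

# Invented example: a few factors, each linked to the actors who influence it.
model = [
    Factor("shelter capacity", 0.4, {"city council": 0.6, "nonprofit coalition": 0.3}),
    Factor("housing supply", 0.5, {"developers": 0.5, "zoning board": 0.4}),
    Factor("public support", 0.6, {"local media": 0.5, "advocacy groups": 0.2}),
]

def most_influential(model):
    """Sum each actor's weights across factors to see who matters most overall."""
    totals = {}
    for factor in model:
        for actor, weight in factor.influencers.items():
            totals[actor] = totals.get(actor, 0.0) + weight
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for actor, weight in most_influential(model):
    print(f"{actor}: total influence {weight:.1f}")
```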

Drawing on these approaches—and inspired by methods for agile system dynamics, business intelligence, data science, program evaluation, strategy dynamics, systems integration, and other resources—I developed a method that incorporates these requirements in a novel way.

The method is structured around seven questions (7Q) designed to produce the knowledge needed to build collective strategies for high-end Level 4 or low-end Level 5 problems. If the most influential actors in a problem-solving system can co-produce that knowledge, orient their operations around it, regularly update it, and have it fully accessible in real time, this method has the potential to produce the kind of adaptive (Level 4) coalitions we’re aiming for (described above). (Intelligent coalitions will need to wait for more capable and binding collective-decision systems.)

Each question has one or more specific methods that can be used to answer it today (with more powerful methods replacing them in the future). Simplified, the questions (and illustrative methods) are:

  • Q1 (goal): What are you trying to achieve? (participatory goal-setting)
  • Q2 (indicators): How will we know it’s been achieved? (collective impact)
  • Q3 (barriers): What are the main barriers? (group model building)
  • Q4 (factors): What factors affect those barriers? (systems mapping)
  • Q5 (model): How are all the factors and barriers related? (system dynamics)
  • Q6 (strategy): What are the clearest paths to success? (strategy dynamics)
  • Q7 (influencers): Whose work affects which factors? (influence mapping)

The goal of the 7Q method is not simply to produce research but to produce an actionable collective strategy and problem-solving platform that can act as a systems integrator.

There are two crucial points about this method that are not captured in the simplified version of the questions:

  • It must be participatory. Collective strategies have to be built collectively. The expertise of scholars and the experience of practitioners are needed to build complete and valid models. But the perspectives and priorities of people who have been most directly affected by the problem (or are most likely to be affected by the solution) can unearth hidden features of the social system that might never have been captured in research and policy before. Equally important, when people participate in the production of knowledge, they are far more likely to voluntarily support the findings—in this case, the collective strategy—than they would be had they been excluded from the process. Getting broad buy-in is essential to operationalizing collective strategies.
  • It must be iterative. The seven questions are not intended to be answered sequentially. Ideally, the process would start with a brief scoping exercise to get preliminary answers to all seven questions to identify where the most effort is likely to be needed later. Important factors (Q4) might emerge early on that have never been studied systematically (“social media bots affect norms”); they can be provisionally incorporated into the model (Q5) while researchers plan experiments to estimate key thresholds and multipliers (“bots have no effect until they reach a critical mass of 10 percent of the population”) and collect data on the players (Q7) that are most influential (e.g., bot farms, empathy trolls). Agile modeling techniques are useful here. (A toy sketch of such a provisional threshold appears after this list.)
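
The threshold and multiplier below are assumptions of the kind to be tested, not findings; the point is only that a provisional effect like the bot example can sit in the model (Q5) as a simple placeholder function and be replaced once experiments produce better estimates.

```python
def bot_effect_on_norms(bot_share: float,
                        threshold: float = 0.10,   # assumed critical mass, to be tested
                        multiplier: float = 0.5):  # assumed strength, to be tested
    """Provisional rule: no effect below the critical mass, then scaled with share."""
    if bot_share < threshold:
        return 0.0
    return multiplier * (bot_share - threshold)

for share in (0.05, 0.10, 0.20, 0.40):
    print(f"bot share {share:.0%} -> effect on norms {bot_effect_on_norms(share):+.2f}")
```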

Participation and iteration are especially important once answers to the questions have been transferred to the “systems integrator” knowledge platform.

A Platform for Collective Success

The technology to build powerful knowledge platforms is widely available. A collective strategy platform can begin as a simple subscription-based data portal or dashboard that organizations can use to keep track of everyone and everything that significantly affects their work.

The information can be presented according to the 7Q structure so everyone working on a particular problem (Q1) can see the key indicators (Q2) and leverage points (Q6) their sector has identified; review the models (Q5) they’ve collectively produced; run simulations (Q6) to ensure their organizational strategies are consistent with the collective strategy; and look up data on the most consequential players in a directory (Q7) that includes “bad” actors as well as problem solvers.
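
One hypothetical way such a portal might organize its records around the 7Q structure is sketched below; the field names and sample values are illustrative, not FFI’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CollectiveStrategyRecord:
    goal: str                       # Q1: what the coalition is trying to achieve
    indicators: list[str]           # Q2: how success will be measured
    barriers: list[str]             # Q3: main obstacles and risks
    factors: list[str]              # Q4: factors affecting those barriers
    model_uri: str                  # Q5: link to the shared system model
    leverage_points: list[str]      # Q6: clearest paths to success
    influencers: dict[str, list[str]] = field(default_factory=dict)  # Q7: factor -> actors

# Illustrative record only; every value here is made up.
record = CollectiveStrategyRecord(
    goal="reduce chronic homelessness in the metro area",
    indicators=["point-in-time count", "median shelter wait time"],
    barriers=["housing supply shortfall", "fragmented service data"],
    factors=["zoning rules", "construction costs", "eviction rates"],
    model_uri="https://example.org/models/homelessness-v3",
    leverage_points=["zoning rules", "eviction rates"],
    influencers={"zoning rules": ["zoning board", "city council"]},
)
print(record.goal, "|", ", ".join(record.leverage_points))
```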

The data and models underlying the collective strategy should be continuously updated as research produces new insights and as people in the real world do things that change their effects on the problem ecosystem. The platform managers should provide regular updates summarizing these important changes, especially factors (Q4) nearing critical thresholds, players (Q7) making big moves, or significant changes in risks to progress on the issue in question (Q3).

As more subscribers join, the platform should be upgraded so they can use it to manage their internal monitoring and evaluation programs; data from many subscribers potentially makes real-time forecasting possible. Incorporating artificial intelligence as well could dramatically speed model development and exponentially increase the platform’s power.

Upgrading Our Collective Capability

It has been emphasized here and elsewhere that uncoordinated efforts to solve complex problems can make things worse. Yet multistakeholder partnerships often find genuine collaboration difficult to pull off. Policymakers often struggle to achieve “whole of government” action. The international aid system talks more about “donor harmonization” than it implements it. Collective action is hard.

I believe—and I treat this as a hypothesis to be tested—that large-scale collaborations like these, with all significant actions well harmonized toward shared goals, might not always be necessary for collective success. Many dynamic models of complex problems demonstrate that a few factors and a few actors have outsized influence on system-level outcomes. Sometimes, it’s not the most expected or obvious factors but some overlooked dynamic that ends up being key.

The 7Q method is designed to identify key leverage points in a problem-solving system—which in some cases might involve only half a dozen factors in a hundred-variable model. But what makes 7Q stand apart from other strategic frameworks is that it also identifies the groups that have the greatest influence over those leverage points.
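
One simple way leverage points surface in practice is a sensitivity sweep: nudge each factor in the model by a small amount and rank factors by how much the outcome moves. The toy outcome model and weights below are invented purely to show the mechanics.

```python
def outcome(factors: dict) -> float:
    """Toy outcome measure: a weighted mix of a few factors (higher is better)."""
    return (0.6 * factors["housing supply"]
            + 0.3 * factors["job access"]
            + 0.1 * factors["shelter capacity"]
            - 0.5 * factors["eviction rate"])

baseline = {"housing supply": 0.5, "job access": 0.5,
            "shelter capacity": 0.5, "eviction rate": 0.2}

def leverage_ranking(factors, nudge=0.1):
    """Nudge each factor separately and rank by how much the outcome shifts."""
    base = outcome(factors)
    scores = {}
    for name in factors:
        bumped = dict(factors)
        bumped[name] += nudge
        scores[name] = abs(outcome(bumped) - base)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in leverage_ranking(baseline):
    print(f"{name}: outcome shifts by {score:.3f} per +0.1 nudge")
```

In this toy, two of the four factors account for most of the movement, which is the pattern the 7Q method looks for at much larger scale.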

In those cases, you don’t need a giant “whole-of-society” collaboration to make progress; you just need to know who the strategic actors are, then work to either scale up or neutralize their influence, depending on what the collective strategy suggests is needed.

That means charitable foundations, impact investors, philanthropists, government agencies, and communities who want to be more strategic in their social investments can initiate a 7Q process to figure out who those strategic actors are. It can help you identify who to partner with (the entities that have the best chance of making positive change in the highest-leverage factors); whose influence needs to be contested (those actors producing the most damaging dynamics); whose work you can safely ignore (everyone who has no real effect on strategic outcomes); and what efforts and innovations might need to be incubated (those strategic factors where no consequential and constructive efforts are currently being made to change them).

But everyone else who wants to make the world better now has an opportunity to upgrade their problem-solving capability as well. A collective strategy makes it possible for anyone working on any aspect of a large-scale social problem to find the people and organizations that affect their work—and it gives them a framework for working together to discover what it would actually take for all of them to achieve ultimate success.

For Further Reading

Most of the titles below are hot-linked to the original publication or further information about them.

On Dual-System Problems

Robert D. Lamb and Melissa R. Gregg, The Dual-System Problem in Complex Conflicts, Strategic Studies Institute, Army War College, forthcoming

Robert D. Lamb, “Unlearned Lessons and the Dual-System Problem,” Strategic Studies Institute, Army War College, January 2017

Robert D. Lamb and Melissa R. Gregg, “Preparing for Complex Conflicts,” policy brief, United States Institute of Peace, October 2016

On Problem-Solving Capability Levels

Robert D. Lamb, Rethinking Governance and Statecraft in a Cybernetic World, monograph in progress

Robert D. Lamb, “Forget Turing: Machines Have Already Passed a More Important Test,” blog post, rdlamb.com, June 21, 2017

Robert D. Lamb, “Introduction to Governance Systems Analysis,” lecture, The Hague Symposium, July 10, 2015

Robert D. Lamb and Brooke Shawn, “Is Revised COIN Manual Backed by Political Will?” Center for Strategic & International Studies, February 6, 2014

Robert D. Lamb, “Formal and Informal Governance in Afghanistan: Reflections on a Survey of the Afghan People,” Occasional Paper no. 11, The Asia Foundation, April 2012

On Collective Strategy and the 7Q Method

Nancy Hayden, Asmeret Naugle, Len Malczynski, and Robert D. Lamb, “Fixes that Fail or Failure to Fix? Understanding the Dynamics of Policy Uptake,” working paper

Jed Emerson and John Richardson, “eDemocracy, an Emerging Force for Change: Mapping the Ecosystem for Impact and Investment,” Stanford Social Innovation Review, forthcoming

David Peter Stroh, Systems Thinking For Social Change: A Practical Guide to Solving Complex Problems, Avoiding Unintended Consequences, and Achieving Lasting Results, Chelsea Green Publishing, 2015

Kim Warren, “Agile SD: Fast, Effective, Reliable,” white paper, Strategy Dynamics Ltd., 2015

Robert D. Lamb, “How Will We Learn?” keynote address, Center for Strategic & International Studies, October 21, 2014

Larry Yeager, Thomas Fiddaman, and David Peterson, “Entity-Based System Dynamics,” white paper, Ventana Systems Inc., March 19, 2014

Robert D. Lamb, “Measuring Absorptive Capacity (MAC): A New Framework for Estimating Constraints,” policy brief, Center for Strategic & International Studies, 2013

Robert D. Lamb, “Beyond Lessons Learned: Reengaging the Public about Civilian Capabilities,” Stability Operations 9, no. 1, October 2013

John Kania and Mark Kramer, “Collective Impact,” Stanford Social Innovation Review, Winter 2011

About This Brief

Foundation for Inclusion (FFI) Research Briefs are intended to summarize recent and ongoing scientific research that is likely to be of interest to people working to make the world better. This first issue summarizes the foundational research FFI was built around, focusing on the conceptual innovations that inspired its collective-strategy framework: the dual-system problem, the six levels of problem-solving capability, the whole–parts–gaps framework, and the seven-questions method. Together these innovations shift the focus from solving problems to fixing problem-solving systems and make it possible to discover the most strategic path to large-scale change.

About the Author

Bob is FFI’s founder and CEO and formerly a visiting research fellow at the Army War College, conflict director at the Center for Strategic and International Studies, and strategist at the Department of Defense. With a two-decade career as a scholar and strategist, he specializes in how complex societies change and founded FFI as a strategy hub and impact incubator—and the permanent home of social progress.

About the Foundation for Inclusion

The Foundation for Inclusion is a not-for-profit, tax-exempt (501(c)(3)) social enterprise dedicated to building a more civil, peaceful, and inclusive world by giving changemakers and innovators the tools they need to solve complex problems in divided societies. More at foundationforinclusion.org.
