What is “capitalism”?

I’m giving myself some permission to provide a definition of capitalism as I understand it, and not the absolute best definition of the word. Books have been written about the subject, and the way capitalism works has shifted a lot since the 1700s, when its foundational treatises were written by Adam Smith and his contemporaries. So the definitions I land on are going to be imperfect, incomplete, and only relevant to the present time–a different definition may be more applicable in, say, 1900 or 2100.

What is an “economy”?

First, we need to make sure we’re on the same page about what an “economy” is. The way it’s discussed can make it sound like a vast, unknowable thing that needs to be handled gingerly for fear of ruining it.

An economy is a tool. It falls into the same broad category as a wrench, a government, a personality test, and an atomic bomb. Specifically, an economy is a tool that helps a collective (a family, a city, a country, a world) distribute the resources available to it.

Like other tools, it’s good for some things and not for others. Like other tools, it can be handled properly or improperly. And like a lot of other tools, it’s possible to improve upon its design–either to make it more useful overall, or to make it better at some specific thing you need to use it for.[1]

This definition is important because of the language that’s often used around economic questions. For example, it’s often asserted that socialized medicine would lead to “rationing.” However, since medicine is part of the “economy,” it’s already “rationed”–the market rations it by setting higher price points for more complicated and rarer products, rather than someone making a decision about who gets what. Setting a high price for something is a way of rationing that item to the people who have the ability to pay for it.[2]

Any given economy has many ways to ration out its resources. In the United States, for example, we ration out emergency services based on people calling 911. In principle, we could let the market take care of emergency services, and you’d either have to sign up for a subscription (like an alarm company) or give them a credit card number before they put out the fire in your house, but at some point we decided to pay for emergency services with tax dollars instead, and ration out those services based on who was asking for them. But whether they’re handled by the market or by 911, centralized emergency services are part of how we distribute resources, and so they’re part of our economy.

Isn’t “capitalism” just a fancy word for “free market”?

“Capitalism” and “free market” are different things. Free market just means that anyone is able to sell a new product to compete with others that already exist in the marketplace. It’s the opposite of “monopoly,” whether that monopoly is owned by a single powerful business or a government entity. It’s a pillar of capitalism, but an economy doesn’t need to be capitalist to have free markets.

I’m not convinced there’s any such thing as a “free market economy.” Just like socialism (discussed below), there are different degrees to which an economy’s markets are free. Some items may be completely unregulated, while other items may be sold from only one vendor or handled outside the market.

Some products and services might be handled better by a free market, while other products and services are better handled by a regulated market or a closed market.[3] You don’t have the ability to sign up for a private fire extinguishing service when you buy a home–that’s a market that isn’t free. Your house may be required to meet local building codes, but you can hire any licensed contractor to make sure it does–that’s a free but regulated market. You can also buy whatever decorations you want to put inside your home that are made by anyone, provided they don’t break other laws (e.g., they aren’t filled with cocaine or made with the fur of endangered animals)–that’s a free and unregulated market.

A market completely free of regulation only remains a free market for a short period of time. Markets left to themselves tend to eat themselves[4] until only monoliths exist that are capable of either decimating or acquiring all their rivals.[5] Regulations are needed to prevent monopolies from forming and to ensure fair competition and consumer transparency (for example, without regulations McDonald’s would never have to tell anyone they’re putting sawdust in their food to make it cheaper).

The idea pushed by neoliberal (free-market) economists like Friedrich Hayek is that the free market is much more effective at distributing resources than any government could ever be. Some economists, including Milton Friedman, argue that allowing the market to distribute resources also maximizes efficiency and manages to create more resources where previously there were fewer.[6] But there are probably very few people who wouldn’t concede that at least some circumstances require us to offer resources based on need and not the ability to pay, and you won’t find many people who think we should do away with all money and assign resources only based on need either. Most economies will include some elements of free markets and some elements of central planning.

So what is “capitalism”?

Capitalism has a few basic features we’ll be looking at over the next several posts:

  1. Pricing: This is supposed to be the real advantage of capitalism. If a government entity tries to decide how much a new product might be worth, it might end up picking a number out of the blue that doesn’t reflect everything that goes into making that product or what people could feasibly pay for it. But if a business tries to sell that product on the open market and another business tries to sell a similar product, the two competitors will keep adjusting their prices until they both hit a spot where they’re making acceptable profits or one of them has gone out of business.[7] (There are a lot of problems with market pricing when rational decision-making is compromised and/or when long-term impact is taken into account. We’ll get into that later.)
  2. Value: Everything, no matter how abstract or intangible, can be assigned a monetary value. This includes the value of an idea, a policy, or a human life. It suggests that everything in existence, both real and imagined, can be valued in terms of the amount of money that would be paid for it. Items that have complex value (for example, human beings) are simplified to the relevant value–a human being might be valued based on her accounting skills when she’s being hired at an accounting firm, but would be valued based on her public speaking skills if she’s being hired as a diplomat, or valued based on the cost of a wrongful death settlement when a car company is thinking about recalling its airbags, or valued by the total of all the potential money she would have made over the rest of her life if an economist is trying to determine the economic impact of her early death. It’s a simultaneously practical and horrifying way of thinking about the world.
  3. Means of production: In a capitalist economy, the government doesn’t own assets such as farms, factories, workshops, recording studios, and oil refineries–the means of producing the products we buy. The “means of production,” as they are called, are owned by private citizens, either individually or through corporations, trusts, funds, cooperatives, and similar structures.[8]

    The means of production are the source of all wealth. Labor isn’t important to capitalism–it represents a necessary expense in order to extract the value from assets, and should be minimized as much as possible. Capital–or the means of production–is what’s important. Labor is paid with wages that get spent on products. Capital yields profits that are invested into more capital. Labor is always depleting resources while capital is always accumulating them.

    The basic idea put forward by Adam Smith back in the 1700s is that someone controls the capital–a needle factory, for example–and hires other people to work for him. These employees are able to produce a lot more than they would if they were working alone because they’re able to specialize in the tasks they’re most competent at performing–for example, pulling the wire, cutting the wire, shaping the head, sharpening the tip. Where one person could make, say, 200 needles per day doing every task on her own, four people could make 2,000. In Adam Smith’s ideal world, the workers get more money than they would on their own because they’ve produced more needles, and the factory owner would harvest the excess value–for example, if each employee gets 300 needles’ worth of wages, the factory owner would still get 800 needles’ worth of profits.[9]
  4. Capital Accumulation: Not only does capital produce profits, it also accumulates wealth. Most of the things money can buy in the world, from cars to computers to carrots, are depreciating assets. They decay over time or are used up, resulting in a destruction of wealth. But there are some assets, such as land, factories, and brand names, that can become more valuable over time. People who own these things get wealthier without having to work (appreciation),[10] while people who work and buy things will have to keep working and buying because their things keep getting used up (depreciation). Most people have a little bit of both appreciation and depreciation: you might have a retirement account that holds stocks that appreciate over time, but your clothes depreciate as they wear out and you have to buy new ones. The greater the proportion of appreciating assets you have, the more your wealth accumulates. When it comes to valuing human beings, there are two tiers: people who are valued based on the work they can do, and people who are valued based on the capital they control. A truck driver for McLane Company is valued based on her labor; Warren Buffett, the CEO of McLane’s parent company Berkshire Hathaway, is valued based on his assets.
  5. Capital Markets: Once separate pieces of capital are pulled together under one ownership structure, such as a corporation, you can break up that ownership into pieces and sell them to other people. And because everything can be assigned a value, and because selling it on a market in a competitive environment results in an actual price, you can do this with anything. (That’s what got the United States economy into trouble in 2008: lenders taking a bunch of mortgages, packaging them together and selling them, then taking pieces of those packages, packaging them together, and selling them, then hedging some of those against other packages… etc.)
  6. Fluidity: The free market isn’t just for products and capital. Unlike under feudalism, people can easily move between different employers under capitalism, even without having to pick up and move to another part of the world. The flip side of this is the degradation of community: since people need to be able to move around easily, your bonds with your neighbors are weaker. Everyone is responsible for her own fate, and many more of our interactions are transactional: instead of confiding in your local priest or preacher, or talking things through with your neighbors, you might pay a therapist or life coach to listen to your problems.[11]
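The needle-factory arithmetic in footnote 9 can be checked with a few lines of code. All figures are the footnote’s illustrative numbers, not historical data:

```python
# Illustrative figures from footnote 9 -- not real data.
needles_alone = 200      # needles one craftsperson makes per day alone
needles_factory = 2000   # needles four specialized workers make per day
workers = 4

income_alone = needles_alone * 0.50     # selling alone at 50 cents: $100/day
income_undercut = needles_alone * 0.40  # forced down to 40 cents: $80/day
factory_wage = 85.0                     # the factory's offer: $85/day

factory_revenue = needles_factory * 0.40                    # $800/day
factory_profit = factory_revenue - workers * factory_wage   # $460/day

print(income_alone, income_undercut, factory_wage, factory_profit)
```

Whichever option the workers pick–competing at 40 cents or taking the $85 wage–the owner’s daily take exceeds any worker’s, which is the asymmetry the footnote describes.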

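The appreciation/depreciation divide in point 4 can be sketched with a toy compounding loop. The rates here (5% annual growth, 15% annual decay) are assumptions chosen for illustration, not market figures:

```python
# Hypothetical rates for illustration only: 5%/yr appreciation, 15%/yr decay.
appreciating = 10_000.0   # e.g., shares in a fund
depreciating = 10_000.0   # e.g., a car

for year in range(10):
    appreciating *= 1.05  # capital compounds upward
    depreciating *= 0.85  # possessions wear out

print(round(appreciating), round(depreciating))
```

After a decade the capital holder is wealthier without having worked, while the depreciating asset has lost most of its value and will need replacing out of wages.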
As I’ve described in a previous post, capitalism effectively decoupled and abstracted political and economic realities as they existed under feudalism. Where feudalism required wealthy people to own land, capitalism enabled wealthy people to maintain and grow their wealth independent of land.[12] This also means violence takes on a different form under capitalism, and warfare and destruction are less direct.

Why capitalism?

Capitalism is underpinned by a belief in the effectiveness of rational (or enlightened) self-interest: that people make the best decisions for their own needs with the information they have available to them.[13] The result is the “invisible hand” of the marketplace moving to fulfill the needs people are ready to pay for: if all of a sudden everybody wants to eat radishes, the price of radishes will go up, and more people will grow radishes because they’ll get a better price for them. (In later posts we’ll look at how this breaks down, especially in the age of the Internet.)
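The radish scenario can be sketched as a simple price-adjustment loop. The linear supply and demand curves below are hypothetical, chosen only to show the mechanism:

```python
# Hypothetical linear curves -- not real radish data.
def demand(price):   # quantity buyers want at a given price
    return max(0.0, 100 - 20 * price)

def supply(price):   # quantity growers will plant at a given price
    return 20 * price

price = 1.00
for _ in range(100):
    excess = demand(price) - supply(price)
    price += 0.01 * excess   # shortage pushes price up, glut pushes it down

print(round(price, 2))  # settles where demand equals supply
```

Each round, unmet demand raises the price and unsold surplus lowers it, until the quantity demanded matches the quantity supplied–the “invisible hand” rendered as a feedback loop.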

Capitalism was originally developed in opposition to the idea of a central regime–usually a monarchy–attempting to make the best determinations about how resources should be used. Under feudalism, a monarchy might grant a certain portion of land to a knight, who would be the caretaker of that land and all the commerce that was conducted on it. As opposed to the visible hand of a ruler distributing resources under a feudal economy, capitalism depended on an “invisible hand” distributing resources based on how much people would pay for a product or service.

It’s helpful to see capitalism as the product of its particular time:14To put capitalism into perspective next to similar ideas in physics, capitalism is like Newtonian physics. We still use Newtonian physics for some purposes, but imagine if technologies refused to take relativity or quantum mechanics into account–we wouldn’t have inventions like GPS or even modern computers.) the Age of Enlightenment, when philosophers and lawmakers alike saw education and rationality as a way of elevating humanity15Enlightenment rationality has a long and problematic history with the concept of race, in which white Europeans were thought to carry the “white man’s burden” of rationality into the rest of the world and elevate all other races of humanity beyond what they would be capable of achieving without the salvation of white rationalism and scientific progress. This belief has influenced modern politics among both conservatives and liberals, long after it became unfashionable to be explicitly and unapologetically racist. out of conflict, poverty, and all our other problems. If most men16Enlightenment rationality also has a long and problematic history with sexism, in which for a long time only men were believed to be capable of rationality, and even today women are frequently dismissed as emotional or irrational. On the other hand, feminists such as Mary Wollstonecraft made appeals to rationality as a way of bolstering their arguments. Even so, it’s been argued that rationalism was the language of the time and not a prerequisite for the success of feminism. became educated and rational, the thinking was, they would be able to lead less rational men and women toward a collective social ideal.

Capitalism was successful in its time due to the invention of new technologies and methods of organizing. Today, the invention of new technologies and methods of organizing are among the reasons we need to develop new economic models and methods that leave capitalism behind.

What about socialism?

There’s a lot of discussion about socialism in the political arena these days. To some people it’s a scary word. Others embrace it whole-heartedly. But what is it?

Socialism is a version of capitalism in which the government owns a stake in some industries. Depending on how you define “industry,” most economies are socialist to greater or lesser degrees. For example, in the United States, the government owns most of the military industry and the education industry, as well as a significant stake in the amusement and tourism industry in the form of parks, museums, and monuments, and a pretty big piece of the transportation industry in the form of roads, bridges, waterways, and airports. The United States isn’t socialist to the degree the United Kingdom is–there, the government owns some television and radio broadcasting channels and the lion’s share of the medical industry–but the United States government owns a large enough portion of certain industries that it can already be called socialist. So the discussion isn’t about whether to be socialist, but about which industries benefit from government involvement, and to what degree.

To be clear, I intend to look at issues with capitalism itself, which means the solution may not be socialist in nature. It might require a re-thinking of the assumptions of capitalism, which would necessarily challenge socialism as well.

What about communism?

In my view, communism is also a modification of capitalism. The means of production are held in common by the people, but otherwise it’s intended to work more or less the same way.

In its purest form, communism requires the means of production to be owned by a democratic government and made available to the people based on who will make the best use of it. For example, if one person wants a piece of land to build a farm and another person wants to build a factory on it, there would be a rational, unbiased way of determining whether the factory is needed, whether it’s a good location for a factory, whether the soil and rain conditions are good for farming, etc., in order to decide which person should use that piece of land. In practice, centralized decision-making is the sort of outrageous idea that could only be conceived in an early Industrial Age mindset, when it seemed like rationality would take over the world and everything would become quantifiable.

In the capitalist ideal, some profits flow to workers by way of increased wages, while the rest flow to the owners by way of capital accumulation (whether or not that actually works is another question). In the communist model, the state takes the place of the owner. It’s a shortcut to the dark capitalist utopia envisioned by Arthur Jensen in the movie Network: “one vast and ecumenical holding company, for whom all men will work to serve a common profit, in which all men will hold a share of stock, all necessities provided, all anxieties tranquilized, all boredom amused.”

The real flaw in communism is not that it kills worker motivation, but that a centralized decision-making process for who gets to use assets is extremely prone to corruption and to the short-sightedness of the central decision-makers–even if that process is, in principle, controlled by all the people. In principle, the United States government executes the will of the people, and yet somehow only 17% of people in the latest Gallup poll approve of Congress; that number hasn’t been above 50 percent since 2003. It’s hard to imagine people would be happier with Congress if it were directly in charge of all the jobs, resources, and products in the country. Put the no-nonsense Jack Welch in charge and you might stifle innovation for a decade; put the visionary Steve Jobs in charge and you’ll get a tremendous amount of innovation that fits his particular vision, but nothing outside of it. Even the perfect candidate would be limited by human decision-making–and let’s be honest about our human tendencies toward rationalizing corruption.

What’s wrong with capitalism?

It’s been my conviction for the past several years that we are seeing a shift that heralds the end of capitalism as a ruling ideology in our world. This isn’t because we’re getting smarter–it’s because capitalism, and its variations including socialism and communism, fail to address our economic realities.

Some people think this is a bold claim, some people think it’s ridiculous, some people might even think it’s anti-American. But to me it’s about as obvious as pointing out that winter is coming: it’s getting colder, the leaves are falling, we all know where that leads. And if capitalism is ending, we’d better get ready to replace it with something.

I don’t have the solution for what replaces capitalism. But this is a conversation I’m ready to have with other humans who want to see a world in which ordinary people can produce meaning and improve our collective quality of life.

What follows will be a series of posts pointing out what I see as some of the key strengths and weaknesses of capitalism, so that we can begin to talk about how we might do better in the future.

If I left anything important out of my definition of capitalism here, please feel free to comment below. Keep the comments on this post to a discussion of the definition here–there are going to be a lot of posts in this series and there will be plenty of time for criticism and theorizing. It will be helpful to have a solid working definition to start.

Talk to you soon, fellow humans.

References

1. For example: a hammer with a shock-absorbing grip is an overall improvement, while a rubber mallet is a specific type of hammer that specializes in doing particular tasks well.
2. One of the forces changing the economy today is that there are significant resources on the Internet that don’t need to be rationed, including information obtained through searches, social media posts, even video streams to a lesser extent. Since humanity isn’t capable of consuming all of the resources that one company is capable of producing, this results in a winner-take-all scenario that was extraordinarily rare before about 1998. Google receives nearly 80% of search traffic, but you can’t split up Google’s search engine as though it’s a monopoly because after an initial period of confusion, most of those users would settle on one of the two new engines, making it the “new Google.”
3. John Stuart Mill says in Principles of Political Economy, “Laissez-faire, in short, should be the general practice; every departure from it, unless required by some great good, is certain evil.” This represents the attitude of later “free-market” economists, including Milton Friedman (who, unlike Mill, held to this dogma his whole life). But careful readers will notice that even Mill allows for regulation when required by “some great good”–and as capital markets have become more complex, “good” and “evil” have also taken on complexity. Mill might have changed his mind about some things if he’d seen the financial instruments that brought about the housing crisis in 2008.
4. Through mergers, acquisitions, or winner-take-all competitions.
5. Amazon is an example of a company that has been able to acquire or destroy most of its competitors before they grew big enough to pose a challenge; however, it fails to meet the definition of a traditional monopoly because retailers like Wal-Mart still hold a non-trivial portion of the online market.
6. A lot of economists believe “efficient market theory” was debunked in the 2008 market collapse. Market pricing itself was thrown off by the irrational behaviors of the actors within the market, proving that market prices could be inefficient for extended periods of time.
7. This is what’s called a Nash Equilibrium. It doesn’t actually maximize benefit for anyone, it just describes the inevitable price points for competing products and vendors given what one’s competition will do.
8. The line between “government” and “corporation” is much murkier than politicians make it out to be. It’s not inappropriate to think of the market as a de facto fourth branch of American government. Whoever holds the means of production is able to set rules and determine important aspects of the lives of citizens. In this light, it may deserve the same respect as, say, the judiciary, but it should also be treated as limited.
9. Adam Smith was idealistic in his illustration. The more likely economic reality of this scenario is that the employees get the lowest wages that the factory owner can offer them, because the factory owner is able to price his needles lower than the needles produced by individual craftspeople. That is, the four people who were selling their own needles at 50 cents apiece now have to compete with a factory selling needles for 40 cents apiece, so they will have to lower their prices to 40 cents and get only $80 a day, or work for the factory for $85 a day, where previously they were able to sell their own needles for $100 a day. The irony is that the factory owner’s profits are lower than they would be if he continued to sell needles for 50 cents and paid the employees $110 a day, but because the free market is competitive, this would leave a lot of room for a competitor to come in and sell needles at lower prices. In any of these scenarios, however, the factory owner comes out better than all of the workers, which is the point.
10. It wouldn’t be accurate to say that they don’t do any work, but it is accurate to say that the work they do isn’t what makes them wealthy. The capital is what makes them wealthy. Many wealthy people hire an investment advisor to do the work for them and can continue to accumulate wealth while doing no work at all.
11. This is what sociologist Ferdinand Tönnies called “Gesellschaft,” which represents a broad and anonymous society, in opposition to “Gemeinschaft,” which represents a specific and familiar community.
12. This was an important transition with the ending of the Malthusian era–where the total wealth available to a population essentially came down to the ability to produce food. Once we invented better ways to produce food, wealth became disassociated from land.
13. That people consistently act against their own rational self-interest does not dissuade believers in capitalism. Usually there is a rationalizing belief that irrational actors get edged out of the market, that poor people are either lazy or need to educate themselves, that the “self-interest” part can be irrational but a person will still act rationally to get it, etc.
14. To put capitalism into perspective next to similar ideas in physics, capitalism is like Newtonian physics. We still use Newtonian physics for some purposes, but imagine if technologies refused to take relativity or quantum mechanics into account–we wouldn’t have inventions like GPS or even modern computers.
15. Enlightenment rationality has a long and problematic history with the concept of race, in which white Europeans were thought to carry the “white man’s burden” of rationality into the rest of the world and elevate all other races of humanity beyond what they would be capable of achieving without the salvation of white rationalism and scientific progress. This belief has influenced modern politics among both conservatives and liberals, long after it became unfashionable to be explicitly and unapologetically racist.
16. Enlightenment rationality also has a long and problematic history with sexism, in which for a long time only men were believed to be capable of rationality, and even today women are frequently dismissed as emotional or irrational. On the other hand, feminists such as Mary Wollstonecraft made appeals to rationality as a way of bolstering their arguments. Even so, it’s been argued that rationalism was the language of the time and not a prerequisite for the success of feminism. became educated and rational, the thinking was, they would be able to lead less rational men and women toward a collective social ideal.

Capitalism was successful in its time due to the invention of new technologies and methods of organizing. Today, the invention of new technologies and methods of organizing are among the reasons we need to develop new economic models and methods that leave capitalism behind.

What about socialism?

There’s a lot of discussion about socialism in the political arena these days. To some people it’s a scary word. Others embrace it whole-heartedly. But what is it?

Socialism is a version of capitalism in which the government owns a stake in some industries. Depending on how you define “industry,” most economies are socialist to greater or lesser degrees. In the United States, for example, the government owns most of the military industry and the education industry, a significant stake in the amusement and tourism industry (parks, museums, and monuments), and a pretty big piece of the transportation industry (roads, bridges, waterways, and airports). That isn’t the degree to which the United Kingdom is socialist–there, the government owns some television and radio broadcasting channels and the lion’s share of the medical industry–but the United States government owns a large enough portion of certain industries that it can already be called socialist. So the discussion isn’t about whether to be socialist, but about which industries benefit from government involvement and to what degree.

To be clear, I intend to look at issues with capitalism itself, which means the solution may not be socialist in nature. It might require a re-thinking of the assumptions of capitalism, which would necessarily challenge socialism as well.

What about communism?

In my view, communism is also a modification of capitalism. The means of production are held in common by the people, but otherwise it’s intended to work more or less the same way.

In its purest form, communism requires the means of production to be owned by a democratic government and made available to the people based on who will make the best use of it. For example, if one person wants a piece of land to build a farm and another person wants to build a factory on it, there would be a rational, unbiased way of determining whether the factory is needed, whether it’s a good location for a factory, whether the soil and rain conditions are good for farming, etc., in order to decide which person should use that piece of land. In practice, centralized decision-making is the sort of outrageous idea that could only be conceived in an early Industrial Age mindset, when it seemed like rationality would take over the world and everything would become quantifiable.

In the capitalist ideal, some profits flow to workers by way of increased wages, while the rest flow to the owners by way of capital accumulation (whether or not that actually works is another question). In the communist model, the state takes the place of the owner. It’s a shortcut to the dark capitalist utopia envisioned by Arthur Jensen in the movie Network: “one vast and ecumenical holding company, for whom all men will work to serve a common profit, in which all men will hold a share of stock, all necessities provided, all anxieties tranquilized, all boredom amused.”

The real flaw in communism is not that it kills worker motivation but that a centralized decision-making process about who gets to use assets is extremely prone to corruption and to the short-sightedness of the central decision-makers–even if that process is, in principle, controlled by all the people. In principle, the United States government executes the will of the people, and yet only 17% of people approve of Congress as of the latest Gallup poll; that number hasn’t been above 50 percent since 2003. It’s hard to imagine people would be happier with Congress if it were directly in charge of all the jobs, resources, and products in the country.

Listen to the Opinion, Speak to the Experience Part 2

“For acquired knowledge cannot be divorced from the existence in which it is acquired.” – Dietrich Bonhoeffer

It’s been pointed out to me that my previous post is a bit confusing. Granted, it’s a topic that’s probably worth writing several books, and a skill that can take years of personal development. But I want to drill down to a core that’s useful even in the short-term.

There is no such thing as objectivity among humans. (As software people are fond of saying, “It’s not a bug, it’s a feature.”) In collaboration with one another, we represent a wide array of experiences and we have each filtered out what we have found to be the salient points that we apply as broad rules of the world. This is a cognitive belief, or what we call an “opinion.”

But behind the opinion is the semi-instinctual gut feeling that is our initial filter. This is an emotional belief: a reaction, derived from our experience, that we first feel and then attempt to understand through logic and words. (I say “semi-instinctual” because highly developed, balanced individuals can actually inform and change their emotional beliefs.)

So, when we are dealing with people–whether it’s working toward consensus at a meeting, motivating a co-worker, or addressing a client’s concerns–we are dealing with a complex of emotional beliefs, masquerading as opinions.

Particularly in business, we’ve been taught to act as though the world is a rational place–or at least, that it can be made rational. And so when we encounter conflicts in opinions, we take all the facts and information from those opinions and try to reconcile them. When we can’t, we start throwing out those that don’t agree with our views until we come up with a patchwork of ideas that meshes together. Or worse, we split the difference between competing opinions and call it “compromise” just to get people on board.

The message of this process is that not every experience is valuable. If I’ve contributed my opinion and it’s been thrown out, it means that I am wrong and my perspective is useless (according to whoever is throwing it out).

But there are reasons for every opinion that are relevant to each solution. If I have a difference of opinion from everyone else in the room, it means I have an important experience to contribute–even if my opinion, the product of that experience, doesn’t bear out in reality.

So much of our focus in management (and even leadership) is on getting the facts, the efforts, the opinions to fit together into a whole. And so we may often end up with solutions that are like an exquisite corpse: a too-elaborate tacking-together of mismatched parts that could never be functional.

What if, instead of trying to mesh together a patchwork of opinions, we instead undercut the opinions and worked to form an understanding of the human experience underlying the problem? What if there were no relevant experiences that didn’t matter? What if an opinion, which we often use as a way of rationalizing our emotional beliefs, is actually a lens we can look through to find the experiences that are most important to what we’re doing? Could we find a way to address the whole reality of our human experience of a problem, instead of presuming that our years of experience or our level of mastery elevate us toward perfection?

I’m not sure of the answer, but I do know that developing my own emotional maturity and my own ability to see through the eyes of others is one of the skills I value most in my business experience. This post is my own opinion: the way that I make sense of my experience. I look forward to being informed by yours.

Listen to the Opinion, Speak to the Experience

We each have at least two sets of beliefs: cognitive beliefs and emotional beliefs. Which one do you believe controls you?

You’re likely to say your cognitive beliefs–because it’s your cognitive self that is analyzing the question, and that part of yourself wants to believe it is dominant. That it has the power to bully your emotional self into agreeing with it.

But if we were all governed by our rational selves, we would look at the same facts, see the same things, and form the same opinions. There would be no public debate, and we certainly wouldn’t have the incessant raving of rabid pundits on every form of media.

My emotional beliefs determine which facts are more important than others, which virtues are more significant than others, which vices are more destructive than others. They are the substance of all my conflicts with my lover, my mother, my best friend, my boss.

But my cognitive self wants to believe it’s in control. And so it formulates cognitive beliefs–what we call “opinions.” These opinions form a shield around our emotional beliefs, which is why we hold onto opinions so dearly. To expose our emotional beliefs would leave them open to invalidation.

To measure and count and address the opinions of people is to be a representative, not a leader. A leader isn’t concerned with opinions, she is concerned with experiences.

Consider the myriad experiences in the debates over immigration: legal immigrants with illegal-immigrant friends and family who risked their lives to cross the border; legal immigrants who struggled through a complex system; immigrants whose legal status is threatened or has slipped; union workers put out of work by immigrants; refugees from physical and economic violence; citizens who live close to violent border towns; illegal parents of legal children; kids who grew up with immigrant parents or grandparents. Every one of these people (and more) has his own experience that informs his opinions about immigration.

Phenomenology, the study of experiences, adjoins the fields of philosophy and anthropology. It’s a field that has gained some attention lately through books such as The Moment of Clarity, which describes case studies using anthropological techniques to inform business decisions at companies like LEGO and Intel. It also helps to turn this inquiry inward, to observe not just the experiences of customers but the experiences of the people within my own organization.

When I shout an opinion at you, what I’m saying is, “This is the best way I can see to reconcile my own experience with what I know about the world.” If you attempt to address my opinion, you are saying, “You just don’t know enough about the world.” When you attempt to address my experience, however, you are asking, “How can your experience inform what we know about the problem?” Doing so not only moves a team toward consensus, but promises a better solution.

Of course, it’s not wise to ask, “What experience do you think is driving your opinion?” Nobody wants to turn a business meeting into a therapy session. Instead, try to live like an anthropologist among those you would seek to lead. Watch how they work and observe their environment. Hear the patterns of their complaints and identify their core beliefs. Consider their incentives and responsibilities. Try to become one of them (without taking it overboard and acting like you can do what they do). Always, always ask, “Why?”

Over time, and with practice, you will start to hear the experiences. And as you do, it will become possible to address problems in a real, substantial way, rather than simply speaking to the opinions.

If you agree or disagree, please share your own experience in the comments so that we can all learn from it.

Do This, Not That: Market Versus Social Norms

Dan Ariely makes a distinction between market norms and social norms in the fourth chapter of Predictably Irrational. He touches briefly upon the way that employers mix their messages, dangerously breaking social contracts and making things about money when they are attempting to lead a socially-driven organization.

As the book documents, operating on market norms (i.e., thinking about the money I’m getting in return for the activity I’m doing) can damage productivity even when compensation is considered adequate. But worst of all, it can damage relationships where we had assumed we were operating on higher terms–social norms like trust, reciprocity, and friendship. And we can’t mix the two: once we perceive that our efforts are being valued according to market norms, that’s the mindset we use for the entire interaction.

The next era of commerce will not be kind to organizations that depend on market norms, except as perhaps a back end, business-to-business protocol. For the most part, those things that are driven by competition, price, and data can be outsourced to computers and become a secondary function of people-facing businesses, businesses that use humans for those things humans are uniquely capable of accomplishing.

If you’re still using market norms to run your business, it’s best to start weeding them out now, before they relieve you of all your self-motivated people and leave you with half-hearted key-punchers.

Here are a few “do this, not that” guidelines for common business practices:

  1. Pay healthy salaries, don’t track hours. Some businesses require hour tracking, but to the extent that it’s possible your people shouldn’t identify the time they put in with dollar amounts. Doing so puts them in a market mindset: Am I getting enough money to be worth what I’m doing? Paying healthy salaries instead removes market questions from their minds, and has the potential to make the rare transformation of money into a social contract: the business is a community that takes care of your needs, rather than an employer compensating you for your activity. This is the genius behind Netflix’s policy to pay employees as much as they would pay to keep them: there’s no need for employees to ever negotiate salary or think about how much their work is worth, so they operate on a basis of trust and social contract rather than constantly competing with the employer for a fair wage. Even better if employees have direct deposit, where the money simply appears in their accounts as if by magic.
  2. Appeal to social contract, don’t talk about money. It should go without saying that you should never bring up the fact that you’re paying an employee, or use money as a bargaining chip for a change in behavior. They’re already aware that a threat to their position in the community is a threat to their livelihood. Focus on the social contract rather than the monetary transaction. Are they letting down their co-workers? Are they hurting their ability to make a difference in the organization? Talk about those things. If you have to mention money, it’s already a lost cause. (If they’re the ones bringing money into it, you might as well address their concerns–they’re already thinking in market terms. Take it as a form of feedback on your ability to keep market norms out of your business, and consider whether the issues raised might affect other people as well.)
  3. Make your people financially secure, don’t cut costs at their expense. If your employees have to be worried about paying the rent, covering bills, and eating, then they are already thinking about their jobs in terms of market norms. If you’re going to employ someone, make sure you’re ready to pay enough that they don’t have to be concerned about the basics of life. That includes health care, child care, and retirement. Ariely and James Heyman report that people who perceived themselves as paid inadequately lost as much as a third of their productivity at a very simple mechanical task (forget creative problem solving), and that’s without factoring in any worries about feeding their children. And if Costco is any indication, paying a living wage is a clear path to sustainable business.
  4. Share successes, don’t pay bonuses. This is a tricky one: Traditionally, bonuses are the way you share successes. But paying bonuses can create a clear line between the actions of an employee and the money, turning the action into market-regulated action rather than social-regulated action. There are different ways of accomplishing essentially the same thing. One is to reframe the concept of compensation entirely, as with my post on taxation. If employees interpret the amount they earn not as a payment from you but as something they are accomplishing with you, it may be possible to avoid activating market norms. Another way is to award the bonus as an in-kind gift–but this is fraught with pitfalls. Having the employee choose the gift causes the employee to think about the monetary value; choosing the gift for the employee puts one in danger of choosing something the employee doesn’t want or need; and having co-workers choose may invite comparison and market-norm thinking among the co-workers.
  5. Show loyalty, don’t dig moats. There are already a lot of financial obstacles to leaving a job. Creating new ones causes your people to think about the job in terms of their financial need instead of thinking about the social contract. Instead, you should make it as easy as possible for them to leave–and challenge yourself to convince them they shouldn’t. To the extent your people feel that they are with you by choice and not by necessity, they will be more likely to act on social norms instead of market norms.

It can be difficult to manage the financial needs of the business while operating on social norms, but undermining the social norms can quickly undo all the effort you’ve placed into creating them. If you start by thinking of your organization as a community, a family, or a nation, you will be on more solid ground. And when in doubt, leave the money out of it.

Who Is the Mother of Invention?

You’ve heard that “necessity is the mother of invention.” It’s a proverb that’s likely over 500 years old. But what does it mean?

The saying might recall Captain Kirk calling down to Scotty in engineering, and Scotty iconically replying, “She canna take much more, Cap’n!” Fans of the show 24 similarly joke about Jack Bauer telling Chloe to “just do it!” as the push she needs to make the impossible happen. And let’s not forget the ingenious agent MacGyver. Our culture is rife with the myth of the skilled but uncertain innovator solving an impossible problem in an unrealistic time frame simply because it was necessary. This kind of resourcefulness is a cornerstone of Americans’ beliefs about economics and the world.

But the question is: How true is it? Not the one-in-a-million stories we pluck from the biographies of rags-to-riches businessmen, but the kind of everyday invention and innovation that drives our economy forward. Does desperation drive invention? Or is it something else?

The answer, as with many things, is dependent on the specific definition and context. Desperation as a sense of urgency to meet a particular deadline may spur certain kinds of innovation. But desperation as a state of being–that is, the lack of security around one’s position, as with financial poverty or the ongoing threat of being fired–tends to lock us into survival mode. Desperate people grasp at proven solutions that promise to get them what they need, rather than inventing solutions that may not be sufficient.

That’s not to say these solutions are without risk. But consider someone who agrees to transport bulk drugs: The activity is risky, but the payoff is assured. Innovation requires room to be uncertain about the outcome: Will there even be a payoff? Will it be big enough? You can see this play out at companies that are in danger of bankruptcy: Rather than innovating out of the problem, for the most part they cut down to the basics and try to replicate past success. For every individual that becomes more innovative under that kind of pressure, thousands lose the ability to innovate at all.

If not desperation, then, what drives innovation?

The first parent may surprise you: Laziness. We innovate because the way things are being done is just too much work. This is part of the reason for a disconnect between hours worked and productivity: An innovator can work half as much as someone who doesn’t innovate, and still accomplish more. Laziness gets a bad rap simply because there are so many who misuse it. One of my own innovations early in my working life was a matter of saving myself the tedium of several weeks of repetitive tasks. That innovation was ultimately spread to offices around the country and saved hundreds of hours.

The other is often thought to be exactly the opposite: Enthusiasm. We also innovate because we want something new and better for the future. Our ability to anticipate the future is one of the things that distinguishes human evolution from natural evolution: we can evolve not just for the present circumstances but for the circumstances we anticipate.

Together, laziness and enthusiasm are the push and the pull of an engine. Laziness, better described, is a dissatisfaction with or disinterest in things as they are; enthusiasm is a deep interest in the possibility of things to come. Spitting out what is and sucking in what’s coming is the process that drives innovation forward. Without enthusiasm, laziness becomes pessimistic and defeatist. Without laziness, enthusiasm becomes toothless; if the present isn’t so bad, it’s better to just let that future come on its own.

Necessity may be a parent of invention in at least one sense: We invent things that are useful to us. If we didn’t need it, why would we invent it? This reveals a critical problem with the way innovation is handled in many organizations. Some businesses try to institute an “innovation department.” But isolating the innovators from the problems is self-defeating. An innovation department has to go the extra mile just to understand what problems need to be solved, and may often end up solving problems that don’t exist or aren’t high-priority. The power for innovation is always best placed in the hands of those who experience the need on a daily basis.

Recognition: The Motivational Compass

I’ve discussed removing obstacles and providing feedback. I want to talk about one other way to feed motivation, one that walks a line between intrinsic and external: recognition.

Lack of recognition is a surefire way to kill motivation. In fact, if you really want to destroy someone’s will to work, don’t criticize their efforts–just ignore them. And yet, many leaders seem to operate on the assumption that if something is good it will be self-evident, and end up seeming to ignore the fruitful efforts of those around them.

In American business, we’ve mythologized disruptors who plough forward with complete disregard for the praise or derision of others: Steve Jobs is our Hercules, Elon Musk our Perseus. But this mythology ignores the reality of the human social identity in favor of the fraction of a percent who accomplish radical individual change. It also ignores the reality that the vast majority of what happens in the world–even the vast majority of change–is a product of those who are not disruptors. We idolize the individual who makes an enormous change while downplaying the collective power of millions who make small changes.

And for those millions making small changes, recognition is completely critical. It’s a social compass: we want to know that what we are doing is useful to those around us, to guide our further efforts to be more useful. In ancient times, it was largely self-evident: if I shoe a horse or patch a tent, I can see how it’s useful to my customer. Today, business is so abstract that often the only indication of whether something is useful is the explicit response of the people around us–particularly in remote work environments (e.g., working from home).

I feel recognized when someone to whom I’ve given authority to value my work has evaluated it, found it valuable, and expressed that value back to me.

I’ll use this definition as a jumping off point to discuss the important parts of recognition:

  • someone: Unlike feedback, which can be automated, recognition is an essentially human, social act. The value of recognition is that the phenomenon exists in another person’s consciousness. Consider even the word “recognition”: making my experience (of another person’s contribution) conscious. Unless the phenomenon exists in human consciousness and is expressed sincerely out of experience, it is false and doesn’t serve the purpose of recognition as a motivator.
  • to whom I’ve given authority: Authority doesn’t necessarily fall along any chain of command. I make the decision to give authority based on my own values. Every action has an intended impact and an intended target, whether these things are conscious or unconscious, deliberate or haphazard. The target of that impact is usually the one to whom I give authority. (This is true because of the converse: the one to whom I give authority is usually the target of my intended impact, even if there’s a more obvious impact on someone else.) However, we may also give authority to others we respect.
  • authority to value my work: The particular type of authority is contextual. The work I’ve done is intended for a specific purpose. To that end, the person who has authority in each instance will depend on the work that is being valued.
  • has evaluated it: Evaluation is a conscious act–it’s not simply taking and using the object, but specifically noting its features and overall usefulness. This is the act of recognition: acknowledging one’s own experience of the work and bringing it to consciousness.
  • found it valuable: Recognizing that someone’s work is useless isn’t helpful when trying to encourage motivation. Even if the work turns out not to be valuable for the specific purpose you intended, try to recognize what is valuable about it. If it’s utterly irredeemable, then the situation may call for feedback but not recognition.
  • expressed that value: These last two steps can sometimes get lost in the act of recognition, when I recognize that something is valuable to me and then go out and use it, while forgetting to report its value. The danger is in believing that recognizing value is sufficient and then keeping that recognition to myself. I not only have to recognize value, but express the value. Expressing the value as I perceive it is enough; even if the work is part of some larger scheme, it doesn’t need to accomplish its ultimate ends to be successful.
  • back to me: This is another point that can be overlooked. Expressing the value you perceive to someone else is great, and can lead to great things. But that’s not the purpose of recognition. Recognition reflects my perception of value back to the person who created that value.

Has this post been valuable to you? What was valuable about it? How could it be more valuable?

Feedback: The Motivation Superpower

Intrinsic motivation, left to itself, can be unfocused. This is especially true across an entire organization. There are ways to improve focus through establishing shared values and getting everyone to tell the same story, but there are also mechanisms for improving the focus of an individual’s intrinsic motivations. Few of these mechanisms are more fundamental than feedback.

I don’t mean peer review forms or a semi-annual sit-down with the boss. I mean simple feedback loops that work throughout every day.

Simple feedback works like this: A subject takes an action, there is a reaction, and information about the reaction is returned to the subject, who can then use the information about the reaction to modify her activity. I touch a hot kettle, the kettle burns my fingers, my nerves send information about my fingers burning back to me, and I pull my hand away. This is how fundamental feedback is. But because so much business in today’s world is abstract, we have to construct feedback loops deliberately rather than expecting feedback to happen on its own.
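The act-react-adjust cycle described above can be sketched in a few lines of code. This is only a toy illustration (the function and parameter names are my own, not from any library): a subject acts, the world reacts, and information about the reaction flows back to shape the next action.

```python
# A toy model of a simple feedback loop: act -> react -> adjust, repeated.
# All names here are illustrative, not from any real framework.

def feedback_loop(act, react, adjust, state, rounds=3):
    """Run an act/react/adjust cycle and return the history of each round."""
    history = []
    for _ in range(rounds):
        action = act(state)                # the subject acts on current state
        reaction = react(action)           # the world responds (the "echo")
        state = adjust(state, reaction)    # the subject uses the echo to adapt
        history.append((action, reaction, state))
    return history

# Example: guessing a target number, moving halfway toward it each round.
target = 100
history = feedback_loop(
    act=lambda guess: guess,
    react=lambda action: target - action,           # echo: how far off we are
    adjust=lambda guess, error: guess + error / 2,  # close half the gap
    state=0,
)
# The guess converges round by round: 50.0, then 75.0, then 87.5.
```

Remove the `react` step (no echo) and the subject has no basis for the `adjust` step at all, which is the point of the bat analogy below.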

Lack of feedback can quickly erode motivation. And the more entrepreneurial or “self-starting” a position is, the more important feedback is to the person in that position. Feedback is your sight, like a bat echoing its own songs to understand the contour of the world around it. If you don’t hear an echo, how do you know what to do?

Yet for how fundamental it is, it’s surprisingly easy to forget. And then it’s surprisingly easy to chalk up motivation problems to lack of incentives, or poor leadership, or other priorities getting in the way, when really the people around you are lost in a world that doesn’t echo back at them.

How can you create effective feedback?

Feedback must be immediate, contextual, and apparent. Feedback is a behavioral stimulus–it has to fit both the time and the context of the action that caused it, and it has to be clear and concise in order to reveal information that’s useful for subsequent action.

This doesn’t mean feedback is always a result of things that are done–sometimes it’s the result of something that’s undone. Networking sites like LinkedIn and dating sites like Match.com tend to provide feedback in the form of a percentage completion bar to let you know how “complete” your profile is. Of course, your profile on these sites is as complete as you want it to be–but by creating this bit of feedback, such sites are able to encourage participants to improve the quality of information about themselves without offering any incentive other than having a “more complete” profile.
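The completion bar itself is trivially simple, which is part of why it works so well as feedback. A minimal sketch of the idea follows; the field names and function are hypothetical, not how LinkedIn or Match.com actually implement it:

```python
# A minimal sketch of "profile completeness" feedback: the percentage of
# tracked fields that are filled in. Field names are invented for illustration.

def profile_completion(profile, fields):
    """Return the percent of the given fields that are non-empty."""
    filled = sum(1 for field in fields if profile.get(field))
    return round(100 * filled / len(fields))

fields = ["name", "photo", "headline", "summary", "skills"]
profile = {"name": "Ada", "photo": "ada.jpg", "skills": ["analysis"]}
profile_completion(profile, fields)  # 3 of 5 fields filled -> 60
```

Note that the number is immediate, contextual, and apparent in exactly the sense described above, yet it measures nothing the user is obligated to finish.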

Feedback is a leadership superpower because all feedback is either grounded on some fixed point (values), directed toward some fixed point (objectives), or both. Thus continuous feedback is a way of aligning the efforts of a team toward the same values and objectives. And if you focus on those ends–values and objectives–when providing feedback, you can effectively avoid micromanagement while getting results that both satisfy your goals and represent your team.

Sometimes as a leader, I may have to manufacture feedback. This may require a shift in perspective: rather than believing there’s no feedback available because something is tied up in political limbo, I may need to provide feedback on the work itself–its quality, its relevance, etc. My team member will be able to take that feedback and apply it to other efforts. As a consequence, they’ll also be creating value that better fits my own vision, since it’s directed toward my feedback.

I may also have to generate feedback for myself. One way to go about this is to establish clear expectations with every completed action. After completing something for which I expect feedback–which does not necessarily mean something that requires “notes” or changes–I can mention the kind of information I want to receive and the date by which I would like to receive it, and then follow up after the appointed time has passed. Remember, this information should be immediate (and contextual), concise, and oriented toward fulfilling values and accomplishing objectives; it should as a result be quick and easy for the requested party to provide.

Proper application of feedback can, on its own, stimulate a lot of action without the addition of artificial incentives. It’s the first step in turning intrinsic motivation outward, but it doesn’t yet offer an actual incentive–merely a reflection. The information reflected back at us also implies specific objectives–something that someone outside of us is looking to find, and therefore something we can work specifically to improve, which we do if we have the intrinsic desire to create something useful for another person. Giving feedback without tying it to any extrinsic reward is the second level of motivational strategy.

What are some effective ways you’ve found to provide feedback to others? What ways have you learned to solicit useful feedback from others?

Are You Destroying Your Motivation?

In traditional economics, motivation is simple: People want stuff. The amount of stuff they want is unlimited, therefore if you want people to do something you give them more stuff, and they will do it for you. Threaten to take away stuff, and they will avoid doing whatever would cause their stuff to be taken away.

The emerging field of behavioral economics takes a more holistic (and realistic) understanding of motivation: Sometimes people don’t want stuff. Sometimes people do things that don’t help them get more stuff. And there are some things you can’t convince people to do no matter how much stuff you give them.

This distinction is critical to understanding the sudden popularity of gamification. Gamification is a way of creating incentives. But many people have attempted to create incentives based on the assumption that the only two motivators are pain and gain. This approach seems to assume that human beings operate like machines: give us a directive and reasons to follow the directive, and we will.

This thinking is what caused behaviorists to discover the overjustification effect, which is dangerous and can be permanently damaging. Dubner and Levitt give a perfect example in Freakonomics of an Israeli day care facility that wanted to discourage parents from arriving late to pick up their children and started charging for late pick-up. The result was that more parents arrived late. The fee for late pick-up had supplanted an intrinsic motivation (guilt over inconveniencing the day-care workers) with an extrinsic motivation (a small fee to compensate for that inconvenience). What’s more, when the fee was subsequently removed, the damage had been done: parents continued to see the late pick-up as a service, but now it was a service they were getting for free.

Businesses often take a similar simple-economics approach to dealing with their own people. Incentivize this, disincentivize that, counter-incentivize something that you’re making more difficult. Much of the bulk and complexity of large organizations can be traced back to complicated incentives and metrics.

So before you do anything, remove the obstacles.

It’s impossible to know whether sufficient intrinsic motivations are there if you’ve piled up a mountain of paperwork in front of them. Sometimes the barriers exist outside your organization, such as limits on your ability to market a new service. If you want a particular activity to occur more often within your organization, start by identifying and removing the obstacles, and then step back. Adding incentives is dangerous and difficult to reverse, and can result in unexpected and undesirable behaviors. If you create a space for something to happen, and your people are aware that the space exists, wait and see what comes to fill it.

Are Your Taxes Too High?

Many business owners and investors stand squarely on the side of tax cuts. Their belief is that taxes are too high and should be reduced to encourage economic activity. And yet many of them fail to apply the argument to their own nations.

“Wait–taxes?” you say. “I don’t levy taxes.”

With the invention of capitalism, economists re-branded feudal taxation as “harvesting excess value.” But wages can also be seen as a reverse tax on the value produced by workers: where a feudal lord might take a certain amount or percentage, the capitalist allows an employee to keep a certain amount.

This reverse tax was an important invention when capitalism was conceived. It allowed people who didn’t create revenue directly (in the form of crops, manufactured goods, etc.) to create new kinds of value, particularly value that could only be created in concert with other specialists. In turn, workers were given a stable wage and economic and seasonal fluctuations were (in theory) absorbed by the capitalist. So a business became a kind of micro-socialist system within the larger context of capitalism.

Employees are, by and large, grateful for this micro-socialism: generally speaking, employees prefer to have stable paychecks, benefits, and job security–and if they don’t, they can always start their own businesses. But trying to squeeze every cost-saving measure you can out of your employees can be counter-productive, in the same way that increasing taxes can be counter-productive when there aren’t meaningful benefits to match.

While many entrepreneurs face the problem of not paying themselves enough, there are some who pay themselves far too much. And as a company grows, owners and executives tend more and more frequently to be out of touch with how much they are taxing their employees–and what it costs their businesses.

This brings up two relevant issues from the world of economics and policy:

  1. What is the appropriate level of taxation to maximize revenue?
  2. What social programs are meaningful enough to justify taxation?

The appropriate level of taxation to maximize revenue follows what is known as the Laffer Curve (although the idea long predates Arthur Laffer). The basic idea is that tax revenues at 0% and tax revenues at 100% are both zero (which is not entirely accurate, but close enough to be useful). That means that somewhere between 0% and 100%, there’s a point at which you would receive less revenue whether you increased or decreased taxes, because increasing taxes would discourage revenue-earning activity and decreasing taxes wouldn’t generate substantively more of it.
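
The shape of that curve can be illustrated with a toy model. This is a sketch of my own, not anything from economic data: it assumes taxable activity shrinks linearly as the rate rises, which forces revenue to zero at both extremes and puts a peak in between.

```python
# Toy Laffer curve: revenue = rate * taxable base.
# Assumption (illustrative only): the taxable base shrinks linearly
# as the rate rises, base(t) = b0 * (1 - t), so activity disappears
# entirely at a 100% rate.
def revenue(rate, b0=100.0):
    """Tax revenue collected at a given rate in [0, 1]."""
    base = b0 * (1.0 - rate)  # activity discouraged as the rate rises
    return rate * base

rates = [i / 100 for i in range(101)]
peak = max(rates, key=revenue)  # rate that maximizes revenue

print(revenue(0.0))  # 0.0 -- no tax, no revenue
print(revenue(1.0))  # 0.0 -- 100% tax kills all activity
print(peak)          # 0.5 -- the peak sits between the extremes
```

In this toy version the peak lands exactly at 50%, but that’s an artifact of the linear assumption; the real point is only that a peak exists somewhere between the endpoints, and its location depends on many shifting variables.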

The second question belies the fact that the peak of the curve can be shifted by many variables–and in fact is always shifting. One of the ways you can create a positive shift–that is, justify increased taxes while also increasing tax revenue–is by implementing meaningful social programs. In politics, these social programs can be controversial, but in the operation of a business, they are relatively standard: health benefits, flexible hours, etc. They generally fall under the HR umbrella, but can also fit into support departments like printing, IT, and so on.

By this point you may be thinking this over-complicates the issue. The conventional wisdom is that labor is labor, and you compensate people based on the work they do and how well they do it–a simple transaction of value for money.

However, conventional economics has this one wrong. The emerging field of behavioral economics recognizes that behavior is shaped not just by incentives and disincentives, but also by context, internal motivations, values, ethics, biases, and other factors. When you hire someone, an hour isn’t always an hour. The amount their work is taxed, the context in which they operate, and their emotions toward their work and employer will all affect their behavior.

Given this knowledge, you could go with a more libertarian approach–give as much back to your employees as possible–or you could choose to go full socialist.

Lately, many organizations are going full socialist, including The Container Store and Wegmans Food Markets. In addition to providing great benefits, they’ve absorbed nearly all possibility of layoffs, and almost exclusively promote from within rather than recruiting experienced hires.

Netflix employs a different model of socialism: they pay top dollar–as much as they would offer to keep you if you got a job offer somewhere else. Management helps you if you’re in a slump, and if your skills just don’t fit the needs of the organization anymore, they offer a generous severance package, including placement assistance.

There are two interesting traits in these socialist models: they all experience an increase in revenue that more than makes up for the reduction in employee taxes, and none of them offers the outrageous benefits offered by Google or other Silicon Valley heavyweights.

Which brings us to the question: What makes a meaningful social program (a.k.a. benefit)? Google is famous for having extravagant campuses with free meals, on-site massages, and a host of other benefits. But these only provide minor incremental value because they are simply “free stuff.” The same is true if you give your employees gift cards or tickets to a sports game. These aren’t really fostering a better environment, they’re just “free stuff.”

Meaningful social programs remove obstacles and increase feelings of security and freedom. They promote stability and peace-of-mind, they remove distractions or red tape, and they let you know that if something happens–if you have a child or have to take care of an elderly parent, if you have a bad quarter or even a bad year, etc.–your organization has your back. Each program must have a specific intent behind it that removes worries and stigma, provides a safe environment, or better enables problem solving and innovation. Especially for those who are budgeting a start-up, let these criteria be your guide.

For a lot of people reading this, extravagant social programs won’t be within the realm of possibility anyway. You might already be paying your employees less than they could get somewhere else. But it’s not so much an issue of whether you’re paying them less–so long as they’re able to cover their cost of living–but whether you’re taxing them fairly and giving them the tools they need to close the gap with what they could earn elsewhere. Help them–or sometimes just free them–to create new value and bring in new business. Not every potential employee will be excited about the idea, but the good ones will see the opportunity and jump at it.

There’s a lot more to be learned from macroeconomics and tax theory if you have a large business or your business is growing, but I’d love to hear your thoughts and answer any questions in the comments.

Your Organization Is a Nation

Your organization is a nation.

It isn’t like a nation. It doesn’t have the properties of a nation. It is a nation.

Your organization starts with territory. For countries, the territory is geographical: the southern boundary of much of Canada is the 49th Parallel, and Malaysia is bounded in part by the Golok River. For most businesses, the territory is intellectual: Amazon’s territory is its online retail site, Johnson & Johnson’s territory is its products, etc. The territory may have physical properties–offices, factories, warehouses, etc.–but it is intellectual in nature.

Your organization has sovereignty within its intellectual territory, which means it can conduct its business more or less the way it wants. Your organization has governance, a.k.a. “government,” to guide and regulate the actions within its territory. And of course, your organization has its own culture, with its own language, beliefs, and customs.

Because your organization is its own nation, there are a number of lessons to be derived from public policy and economics. It also faces dangers similar to those faced by world governments, even if it’s at a much smaller scale and the results look different: a revolt may look like an internal power struggle, but it may also be a mass exodus. And if your organization is large enough, it has its own subcultures that have to work with one another.

This week, I’ll be discussing some of the practical ramifications of this idea that your organization is a nation.