I often return to the question of whether something is wrong with the structure of our society. There are certainly many recurrent phenomena unpleasant enough to deserve the name of symptom, but do these indicate an underlying pathology?
Suppose that we model each person as having a collection of needs or desires, and suppose that the objective of society is to maximize the sum of desires satisfied over all living people. We take two precautions here: using the sum rather than the mean, to avoid incentivizing the reduction of population; and using living people rather than potential people, to avoid calculations that favor the future over the present. In order to efficiently meet needs, we could construct ways for each person to make those needs legible, and use brokers to go between parties who are willing to exchange goods and services. We could simplify trading with the introduction of a fungible representation of value, and define standardized categories of products to simplify or eliminate the need for brokerage. Welcome to the market economy.
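For concreteness, the toy objective above can be written down explicitly (a sketch; the notation for populations and satisfied desires is my own, not a standard welfare function):

```latex
% P(t): the set of people alive at time t (not potential future people)
% S_i(t): the set of person i's desires that are satisfied at time t
% The objective is a sum over living people, not a per-capita mean:
\max \sum_{i \in P(t)} \bigl| S_i(t) \bigr|
```

Using the sum over $P(t)$ rather than the mean is exactly the first precaution: a mean could be raised by shrinking the population, while restricting to $P(t)$ rather than all possible future people is the second.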
One fundamental problem with such a structure stems from the fact that human desires are not static, and can be manipulated with relative ease. We call the process of manipulating desire advertising, and given this general framing, the profound role it plays in our lives is unsurprising. In the context of a market, generating desire is usually reduced to generating demand for a product.
The practice of generating desire can involve great perversities. Illnesses demand treatment; wars demand arms. So any actor can simply play both sides; for example, by manufacturing both the plastics that cause cancer and the drugs that cure it. Or by developing social media algorithms that promote particular standards of appearance, generate insecurity in users, and then sell fitness and beauty products as ways to compensate for that insecurity (all the while profiting from the use of the social media platform). A single corporation or other entity that obviously participated in such activities might face public or regulatory scrutiny; but little that involves psychology is obvious, and nothing stops individuals from investing money in ways that benefit from such harms. So the cycle continues: destroy, profit, rebuild, profit.
Our society turns to regulatory and legal systems to limit such harms and punish their perpetrators. But the ability of such systems to intercede depends on the existence of conclusive proof that harm was done. Unfortunately, conclusive proof is rare, and when it exists it is usually most obvious to the perpetrators themselves. The people who are most likely to know whether and how iPads harm two-year-olds are the ones who profit from ignoring that possibility.
A popular but simplistic response to this fact is to scapegoat bad or greedy individuals who are exploiting others for personal gain, and demand some form of justice—imprisonment, maybe, or wealth redistribution (usually in the form of taxation). This is somewhat wrongheaded; ideally, society would be robust to the presence of a small number of bad actors. The fact that individuals are sometimes able to cause extensive harm is itself a problem with society. But, more generally, I argue that we would see similar phenomena even in a society composed exclusively of good, prosocial actors.
The essential way that basically good actors are led to commit bad acts is through distance, whether physical, temporal, or psychological. If the harm that one is causing is not self-evident, if it is kept at a distance, it is easy to maintain the illusion of personal morality. Since, as described above, there is profit to be made from various kinds of harm, people are incentivized to construct systems that perpetrate this harm while keeping the consequences and evidence at a distance. This may look like sweatshops in foreign countries, or melting glaciers in remote places, or ignored correlations between smoking and cancer, or a separation of responsibilities that ensures that the one designing the weapons never sees the mutilated bodies. The system self-organizes to carefully protect every perpetrator from any sense of personal responsibility.
In my opinion, this answers the initial question. Is there something fundamentally wrong with our society? I say yes—the incentives for individuals are broken. In an ideal system, someone who is aware of the vague possibility that their actions are harmful should personally benefit from exposing and mitigating the harm; today, they usually benefit from concealing it and obscuring the evidence. I do not yet know what sort of structure would correct this.
I briefly worked at a startup building an AI girlfriend. It was easy for me to see two points of view with regard to my work on that project. The first was the one I had been conditioned to believe—that I was simply a talented engineer working to further my career and my interests through the exercise of my skills. The second was the idea that I was involved in an abhorrent project that would drain unwitting teenagers of their money while damaging their ability to ever enjoy a fulfilling romantic relationship. These two views flipped back and forth in my mind like a sort of Necker cube. I desperately wanted to believe that the former was true, and I was able to convince myself for a time—but in the end I couldn't; I quit. But I can see, now, how someone a little further removed from the potential harms of their work could justify their actions, and how such a person could still believe themselves to be fundamentally good, and how large the advantage of maintaining that illusion is. This is the problem with our society.
I saw a commercial the other day promoting a drug that may cure eczema in children, at the risk of causing hypertension, heart palpitations, and gastric cancer, and imagined for a moment that the company had four different advertisements for medications, each curing one of the four afflictions and causing the other three. Also, who hears “gastric cancer” and feels encouraged to buy the drug for their child?
i hear the book "The Managerial Revolution" is a great resource on exactly this topic: a class of people organizing systems to carefully avoid personal responsibility
i'm not sure that decreasing distance between actors and acts would solve the problem. even at zero distance, it's hard to tell which actions are harmful and which are beneficial. like, what if some teen boy got an ai girlfriend and redirected all the time teenage boys normally spend obsessing over girls into, like, curing cancer or some other vastly net-good act? i'm also not sure harmless acts even exist