A Moral Dilemma: The Crises of Regulating Advanced Technology



The Fourth Industrial Revolution is altering our business practices through advanced technologies. These changes encompass market disruptions caused by artificial intelligence (AI), machine learning, big data, and similar technologies across nearly every sector of the economy. While market disruptions are inevitable as innovation grows, a concern arises from a regulatory perspective: Are we prepared to control a force that prioritizes efficiency and productivity at the expense of people? The development of automated technologies has already displaced low- and middle-skilled workers from the labor market.

Some economists argue that AI technologies are more likely to generate negative social consequences than to deliver the promised benefit of bridging the skill gap as the “great equalizer.” Beyond the labor market, the tech industry faces ethical challenges regarding data privacy, genetic engineering, and even the weaponization of technology in the military sector. The concerns surrounding advanced technologies are extensive, but they all share a moral dilemma: in a market society where everything is for sale and political systems struggle to act for the public good, who will assume the responsibility of regulation?

The public sector faces a predicament: although it aims to protect citizens from technological market disruptions, it simultaneously desires to harness AI’s fullest potential. Before delving into the potential regulatory pitfalls of the Fourth Industrial Revolution, we must revisit lessons learned from the Third Industrial Revolution, also known as the Digital Revolution. The Digital Revolution began around the 1950s and peaked in the late 1990s, transforming the digital economy and raising concerns about data misuse and privacy breaches. The interconnectivity of devices and systems has heightened susceptibility to cyber threats, while the global digital divide has widened disparities in access to technology, further magnifying existing inequalities rooted in income, education, and geographic location. Regulators have acknowledged the significance of data protection, as evidenced by initiatives like the European Union’s General Data Protection Regulation (GDPR), and have recognized the need to bridge the digital divide through multilateral investments in low-income countries. Yet the pace of development in this field far exceeds the public sector’s capacity to regulate it. If the public sector struggled to navigate the complexities of the Third Industrial Revolution, bearing the brunt of its adverse social consequences, how can we expect regulators to seamlessly navigate the intricate landscape of the Fourth Industrial Revolution, whose innovations far surpass those of its predecessors?

Perhaps the resolution to the regulatory challenges posed by advanced technologies lies beyond the capabilities of the public sector and the rigid framework of current regulatory statutes. The crucial first step is to reassess the foundational principles of our economy, particularly the moral compass that influences corporate decision-making and market behavior. A prime example is the ongoing testimony of social media CEOs before Congress regarding their corporate failures to safeguard children from online sexual exploitation. Worries about the influence of social media platforms on child safety are not a recent development. Over 60% of children and 40% of American adults have experienced cyberbullying. Social media corporations blame parents and schools for failing to adequately address cyberbullying; parents and schools, in turn, accuse the government of not implementing adequate regulations for online platforms; and Congress now presses social media CEOs for not adequately protecting children. In this blame game, both the private and public sectors should shoulder responsibility if we consider these crises from a moral and ethical perspective.

In theory, producers throughout the value chain in the technology sector have a moral responsibility to ensure that their products minimize harm to consumers. In reality, corporations prioritize profitable products as long as their processes are “legally sound,” which does not necessarily equate to “morally sound.” This helps explain why big tech firms have faced major lawsuits over the years: markets often lack moral constraints. Michael Sandel argues that we have transitioned from a market economy to a market society, where market values pervade almost every aspect of life, encroaching on areas where they shouldn’t.

The market-driven nature of American society often prioritizes economic gains over community well-being. The congressional hearing involving social media CEOs could have been avoided if these companies had taken on their corporate social responsibilities before introducing products that could jeopardize the safety of children. In addition, public-sector regulators often fall prey to lobbyists who persuade policymakers to favor corporate interests. Amid the Cambridge Analytica data scandal, Meta hired global lobbyists and spent $20 million in 2020 to avoid regulatory consequences. Even justice and accountability have become commodities for sale, complicating the task of regulation.

The challenge of regulating advanced technology reflects a larger issue emerging within America’s social and economic landscape. Centuries of industrialization have fostered a rise in individualism, gradually diminishing discussions about the common good and communal welfare. As the U.S. economy has become highly mechanized and automated, people’s thinking has become similarly streamlined, prioritizing efficiency and productivity over nuanced moral and philosophical considerations. For tech companies, it is often easier to prioritize profitability when introducing new products to the market than to grapple with the potential social consequences or their moral obligations to society. Similarly, our economic theories hold that firms aim to maximize profit, leaving little room for welfare-oriented thinking. As to whether we are ready to regulate a force that prioritizes efficiency and productivity over people, the answer is a resounding no. The gap between technological progress and regulatory oversight leaves us vulnerable to unforeseen long-term consequences in domains such as environmental sustainability, social justice, and human rights. America therefore urgently needs new economic theories and a cultural paradigm in which individuals recognize their moral responsibilities to society.

Before the First Industrial Revolution, the majority of the world lived in small-scale societies where the family unit served both social and economic functions. The rise of industrial cities weakened the family structure and the community to the point that a “value-based” or “community-based economy” is now an alien concept. Modernity has altered our norms, psychology, and overall culture so much that there is no consensus on what constitutes “morally sound” and “just policy” in regulating advanced technology and beyond. It is easy to demand that corporations and the public-sector regulators overseeing them “do the right thing,” but I believe we may have collectively lost sight of our moral compass, a dangerous dilemma. Yet not all is doomed if America reflects on its pre-industrial beginnings, when the economy was rooted in purpose and mindfulness of others. Policies to regulate advanced technologies cannot rest solely on punitive measures; they must reimagine alternative systems that remain profitable for corporations, but not at the cost of others. The first step toward such a community-based economy is to detoxify ourselves from the residue of mass industrialization by recognizing the humanity of individuals involved in every part of the economic process, rather than treating them merely as units of productivity.
