Compared with previous technological advances, the impact of the "Artificial Intelligence Revolution" is even greater, and its influence on economics will be correspondingly extensive and far-reaching. The rapid advancement of artificial intelligence technology has had a major impact on every area of the economy and society, and this influence has naturally spread to economics. Many front-line economists have joined research on artificial intelligence, and many well-known academic institutions have organized special seminars, convening scholars to discuss economic issues in the era of artificial intelligence. In fact, economists' attention to artificial intelligence is not recent. At the theoretical level, the study of economic decision-making overlaps considerably with that of artificial intelligence, which means the two disciplines share many cross-cutting issues.

Foreword

Historically, economists' theoretical engagement with artificial intelligence has come in at least three waves:

The first wave came with the founding of artificial intelligence as a discipline in the 1950s and 1960s. At that time, many economists participated in building the field. For example, Herbert Simon, winner of the Nobel Prize in Economics, was one of the founders of the artificial intelligence discipline and of the "Symbolic School." In his view, economics and artificial intelligence have a great deal in common: both study people's decision-making and problem-solving processes. He therefore integrated many ideas from economics into his artificial intelligence research.

The second wave came at the beginning of this century. By then, economics had made considerable progress in game theory, mechanism design, behavioral economics, and other fields, and these theoretical advances were frequently applied in artificial intelligence.

Recently, economists have turned their attention to artificial intelligence for a third time. This wave has been driven mainly by technological breakthroughs represented by deep learning. Since deep learning depends heavily on big data, much of the discussion in this wave has focused on data-related issues, and the modeling of artificial intelligence reflects properties such as economies of scale and data intensiveness.

At the level of application, the interaction between economics and artificial intelligence is even more frequent. Applications of artificial intelligence can now be seen in fields such as financial economics, management economics, and market design.

In general, the recent economics of artificial intelligence can be roughly divided into three categories:

The first type of research is to treat artificial intelligence as an analytical tool.

On the one hand, some artificial intelligence techniques can be combined with traditional econometrics to overcome its difficulties in dealing with big data. Applying these new econometric techniques, economists can explore and construct new economic theories. On the other hand, the development of artificial intelligence has also facilitated the collection of new data. With its help, information such as voice and images can easily be organized into data, providing important analytical material for economic research.

The second type of research is to use artificial intelligence as the object of analysis.

From an economic point of view, artificial intelligence has a very distinctive nature. First, artificial intelligence is a "general purpose technology" (GPT) that can be applied to many fields, so its impact on economic activity is extensive and far-reaching. When analyzing issues such as economic growth, income distribution, market competition, innovation, employment, and even international trade, it is now difficult to avoid the effects of artificial intelligence. Second, artificial intelligence is a form of intensified automation: it will replace labor and bias the distribution of income. Third, the current development of artificial intelligence depends strongly on the application of big data, which gives it pronounced economies of scale and economies of scope; these two characteristics will have an important impact on issues such as industrial organization, competition policy, and international trade. Together, these features mean that analyzing and evaluating the impact of artificial intelligence on the real economy should become an important topic in economics research.

The third type of research is to use artificial intelligence as a thought experiment.

As a discipline, economics is built on idealized assumptions. In reality, many of these assumptions do not hold, so there is a gap between the predictions of economics and reality. The emergence of artificial intelligence, in a sense, provides economists with an environment that may actually conform to economic assumptions, and thus a place to test the correctness of economic theory.

In this article, the author sorts out the recent economics literature on artificial intelligence and introduces the most important contributions. Since, among the three types of research above, the third leans more toward science fiction than science, this article will set such research aside for now. Interested readers can refer to Hanson (2016) and other representative works.

I. Introduction to related concepts of artificial intelligence

Before formally beginning our discussion of the economics of artificial intelligence, we first need to explain several concepts often mentioned in the literature: "artificial intelligence," "machine learning," and "deep learning." Roughly speaking, artificial intelligence is the broadest concept, machine learning is a branch of it, and deep learning is in turn a branch of machine learning (Figure 1).

Figure 1: The relationship between artificial intelligence, machine learning, and deep learning

In the broadest sense, artificial intelligence is "the ability of an agent to achieve its goals in a complex environment." Different scholars have different understandings about how agents should achieve their goals. Early scholars believed that artificial intelligence should imitate human thinking and action. Its purpose is to create machines that can think like humans.

However, some recent scholars believe that the human way of thinking is only one particular algorithm. Artificial intelligence need not imitate humans; rather, it should allow agents to think and act rationally in a broader sense. Some scholars, represented by LeCun and Tegmark, even believe that blindly imitating the human brain will only restrict the development of artificial intelligence. Artificial intelligence includes many sub-disciplines, such as machine learning, expert systems, robotics, search, logical and probabilistic reasoning, speech recognition, and natural language processing.

Machine learning is a sub-discipline of artificial intelligence and a method of implementing it. It uses algorithms to parse data, learn from it, and then make decisions and forecasts about real-world events. Unlike the traditional approach of programming specifically to solve a specific task, machine learning "gives computers the ability to learn without being explicitly programmed," finding ways to accomplish tasks by learning from large amounts of data.

According to the characteristics of learning, machine learning can be divided into three categories: Supervised Learning, Unsupervised Learning, and Reinforcement Learning.

Supervised learning learns from a sample of labelled data to find general rules linking input and output. For example, a real estate company may have data on the attributes of many houses as well as their prices. If it learns from these data and uses modeling to find the relationship between house prices and the houses' various attributes, that process is supervised learning. There are two major families of supervised learning algorithms: regression algorithms and classification algorithms.
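As a minimal illustration of the housing example (all numbers invented, not from any real dataset), a one-variable regression can be fitted by ordinary least squares:

```python
# Supervised learning sketch: fit price = slope*area + intercept by
# ordinary least squares on labelled (area, price) pairs.

def fit_line(xs, ys):
    """Return slope and intercept minimising squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

areas  = [50, 70, 90, 110, 130]      # inputs: floor area (sq. metres)
prices = [100, 140, 180, 220, 260]   # labels: price (10k units)

slope, intercept = fit_line(areas, prices)
print(slope, intercept)   # learned rule: price = 2.0 * area + 0.0
```

The learned rule can then predict the price of an unseen house, which is exactly the input-to-output mapping the paragraph describes.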

The data that unsupervised learning faces are unlabelled; the task is to learn from the data to uncover the laws hidden within it. For example, art connoisseurs often need to identify the genre of famous paintings. Obviously, no painting carries clearly labelled feature information, so connoisseurs can only accumulate subjective experience by viewing a large number of paintings. Over time, they find that certain painters use certain techniques in fixed ways, and by recognizing these techniques they can identify a painting's genre. In this process, the connoisseurs are performing unsupervised learning. Clustering algorithms are the main algorithms of unsupervised learning.
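A minimal clustering sketch (a one-dimensional k-means with k = 2; the data points are invented) shows how structure is recovered without any labels:

```python
# Unsupervised learning sketch: group unlabelled points into two
# clusters by alternating assignment and centroid update (k-means).

def kmeans_1d(points, iters=10):
    c1, c2 = min(points), max(points)          # initial centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]   # two obvious "genres"
print(kmeans_1d(data))    # centroids converge to roughly [1.0, 10.0]
```

No label ever tells the algorithm which "genre" a point belongs to; the grouping emerges from the data alone, as with the connoisseur.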

Reinforcement learning takes place in a dynamic environment: the learner tries to maximize a reward signal through continuous trial and error. For example, students learn by doing exercises; each time they finish an exercise, the teacher corrects it and lets them know which answers are right and which are wrong. Based on the teacher's corrections, students find and correct their mistakes, so that their accuracy keeps improving. This process is reinforcement learning.

Deep learning, which has received much attention in recent years, is a research branch of machine learning. It learns with multi-layer neural networks, combining low-level features into higher-level attribute categories or features in order to discover distributed representations of data. Traditionally, with too little data available for learning, deep learning was prone to problems such as "overfitting," which undermined its effectiveness. With the rise of big data, however, the power of deep learning began to manifest itself. The rapid development of artificial intelligence technology in recent years has largely been driven by the development of deep learning.
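The overfitting problem mentioned above can be illustrated with any over-flexible model; here a polynomial interpolant stands in for an over-parameterized network (all numbers invented): it achieves zero error on a small training set yet fails badly on a held-out point.

```python
# Overfitting sketch: a degree-4 interpolant through 5 noisy points
# has zero training error but extrapolates far from the true trend.

def interp(xs, ys, x):
    """Lagrange interpolation: fits every training point exactly."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

train_x = [0, 1, 2, 3, 4]
train_y = [0.0, 1.2, 1.8, 3.1, 4.0]   # roughly y = x plus noise

train_err = max(abs(interp(train_x, train_y, x) - y)
                for x, y in zip(train_x, train_y))
pred = interp(train_x, train_y, 5)    # held-out point; true trend ~ 5
print(train_err, pred)                # zero training error, pred = 1.0
```

With more data, the flexible model would be forced back toward the true trend, which is the sense in which big data tamed deep learning's overfitting.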

II. Artificial intelligence as a research tool

Artificial intelligence is a powerful tool for studying economics. On the one hand, machine learning in artificial intelligence has gradually begun to integrate into econometrics and has been applied in economic research. On the other hand, technologies such as speech recognition and text processing also facilitate the collection of materials for economic research. In this section, we do not discuss the application of artificial intelligence in material collection. We only focus on the application of machine learning in economics. For this reason, "artificial intelligence" and "machine learning" can be considered synonymous in this section.

(I) Influence of Artificial Intelligence on Econometrics

1. Econometrics and Machine Learning: From Isolation to Fusion

Statistics is concerned with four issues: (1) prediction, (2) summarization, (3) estimation, and (4) hypothesis testing. As a sub-discipline of statistics, econometrics is concerned with the same four issues. However, as statistics in the service of economic research, econometrics has a more prominent concern with causation: it emphasizes summarization, estimation, and hypothesis testing, while paying relatively little attention to forecasting. Because it emphasizes the explanation of causality, econometrics pays special attention to the unbiasedness and consistency of estimation results, and devotes much effort to solving problems, such as endogeneity, that may undermine the consistency of those results.

Machine learning is a more applied discipline than statistics or econometrics, and the issues it focuses on concern prediction more than the exploration of causality. For this reason, classification models such as the Decision Tree and the Support Vector Machine (SVM), as well as Ridge Regression and LASSO, which are rarely used in econometrics, are used extensively in machine learning.

Due to their different focus, the intersection between econometrics and machine learning has traditionally been very small, and in some cases there are even contradictions between the two. Athey (2018) gives an example: suppose we have data on hotels' occupancy rates and prices. If we want to use price to predict occupancy, the resulting model will usually show a positive relationship between occupancy and price. The reason is simple: when a hotel finds itself more popular, it tends to raise its price. However, if we ask what happens when a hotel cuts its price, that is a problem of causal inference. Here, according to the law of demand, if our setup is correct, the resulting model will usually show a negative relationship between occupancy and price.

However, with the arrival of the era of big data, the intersection between these two disciplines began to gradually increase.

On the one hand, machine learning methods have gradually demonstrated their value under big-data conditions. Traditional econometrics is concerned with small, low-dimensional data, which traditional methods can handle well. But when the number and dimensionality of observations expand greatly, these methods begin to break down. For example, in econometric analysis researchers are accustomed to adding a large number of explanatory variables to a model and then estimating it. This works well when the amount of data is small, but when it is extremely large, the computational requirements become staggering. Researchers must therefore first reduce the model's dimensionality and find the most critical explanatory variables. Here machine learning algorithms such as LASSO come into play.
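How LASSO discards unimportant variables can be sketched under a strong simplifying assumption: with orthonormal (uncorrelated, standardized) regressors, the LASSO estimate is just the OLS coefficient passed through a soft-threshold, so small coefficients are set exactly to zero. The variable names and coefficient values below are invented for illustration.

```python
# LASSO variable-selection sketch (orthonormal-design special case):
# soft-threshold each OLS coefficient; small ones become exactly 0.

def soft_threshold(beta, lam):
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0

ols = {"income": 2.5, "age": 0.08, "noise1": 0.03, "noise2": -0.05}
lam = 0.1                 # penalty strength (chosen arbitrarily here)
lasso = {k: soft_threshold(b, lam) for k, b in ols.items()}
print(lasso)              # only "income" survives the threshold
```

In the general (non-orthonormal) case the solution requires coordinate descent or similar algorithms, but the selection mechanism is the same: coefficients that do not earn their penalty are zeroed out, shrinking the model.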

On the other hand, machine learning can provide inspiration for finding causal relationships. Methods of causal inference are usually applied to a well-specified model, but in reality researchers often do not even know which model to choose. Here machine learning has its place. Varian (2014) gave an example concerning the age and survival probability of Titanic passengers, analyzing the problem with two methods: the Logit model commonly used in causal work, and the decision tree commonly used in machine learning. According to the Logit model, there is no significant relationship between a passenger's age and survival rate. The decision tree, however, shows that children and passengers over the age of 60 had a higher probability of survival, because the elderly and children were allowed to flee first as the Titanic sank. Clearly, in this example the decision tree brings us more valuable information, with which researchers can build further models for causal inference.
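A toy version of Varian's point (with synthetic ages and outcomes, not the real passenger data) shows why: a U-shaped survival pattern can have a linear association of exactly zero, invisible to a single Logit coefficient, while split-based methods recover it immediately.

```python
# U-shaped pattern sketch: zero linear correlation between age and
# survival, but sharp differences once the sample is split by age.

ages     = [5, 10, 30, 35, 40, 45, 65, 70]   # synthetic data
survived = [1,  1,  0,  0,  0,  0,  1,  1]

n = len(ages)
ma, ms = sum(ages) / n, sum(survived) / n
cov   = sum((a - ma) * (s - ms) for a, s in zip(ages, survived)) / n
var_a = sum((a - ma) ** 2 for a in ages) / n
var_s = sum((s - ms) ** 2 for s in survived) / n
corr  = cov / (var_a * var_s) ** 0.5          # linear association

young = sum(s for a, s in zip(ages, survived) if a < 15) / 2
mid   = sum(s for a, s in zip(ages, survived) if 15 <= a <= 60) / 4
old   = sum(s for a, s in zip(ages, survived) if a > 60) / 2
print(round(corr, 2), young, mid, old)   # prints: 0.0 1.0 0.0 1.0
```

The splits (young / middle-aged / old) are exactly what a decision tree learns automatically, which is why it surfaces the pattern the linear model misses.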

It is worth noting that when the training set is small, machine learning algorithms easily run into overfitting, and their advantages are hard to realize. Under big-data conditions, the impact of overfitting is greatly reduced and their value is revealed.

2. The application of machine learning in causal inference

Susan Athey, a former Microsoft chief economist and professor at Stanford University, once wrote in Science about the role of machine learning in causal inference and policy evaluation. She pointed out that machine learning, which has been more used for forecasting in the past, has a strong application prospect in the field of causal inference. Future econometricians should combine machine learning techniques with existing econometric theories.

The first application of machine learning in causal inference is to replace some conventional steps that do not themselves involve causality. For example, causal analyses often use Propensity Score Matching. The first step of this method is to compute propensity scores using kernel estimation or similar methods, and these estimates are difficult to perform when there are many covariates. To filter out the useful covariates, some researchers have proposed applying algorithms common in machine learning, such as LASSO, Boosting, and Random Forests, to the covariate-screening stage, and then carrying out matching according to the traditional steps.
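The matching step itself can be sketched as follows. The propensity scores and outcomes here are invented numbers standing in for a first-stage model's output (a logit, or LASSO/random-forest screening when covariates are many); each treated unit is paired with the control whose score is closest, and the matched outcome differences are averaged.

```python
# Propensity-score matching sketch: nearest-neighbour matching on
# pre-estimated scores, then the average treated-vs-matched gap.

treated  = [(0.80, 10.0), (0.60, 8.0)]           # (score, outcome)
controls = [(0.78, 7.0), (0.62, 6.5), (0.30, 5.0)]

def nearest(score, pool):
    """Control unit with the closest propensity score."""
    return min(pool, key=lambda c: abs(c[0] - score))

effects = [y - nearest(p, controls)[1] for p, y in treated]
att = sum(effects) / len(effects)
print(att)   # average treatment effect on the treated: 2.25
```

Real implementations add refinements (calipers, matching with replacement, balance checks), but the score estimation stage is exactly where the machine learning algorithms named above are slotted in.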

The second application of machine learning in causal inference is the estimation of heterogeneous treatment effects. Past causal inference was mainly carried out in the average sense, focusing on the Average Treatment Effect (ATE). Although such analysis is valuable, in many cases it cannot meet practical needs. For example, when a doctor decides whether to use a therapy on a cancer patient, knowing only that the therapy increases survival time by one year on average is clearly not enough. Because the same therapy affects different patients very differently, the doctor needs to know how the therapy will work for patients with different characteristics. In other words, in addition to the ATE, he also needs to pay attention to the Heterogeneous Treatment Effect.

Athey and Imbens (2015) introduced the classification and regression trees common in machine learning into the traditional causal framework and used them to examine heterogeneous effects. They compared four different tree algorithms—Single Tree, Two Trees, Transformed Outcome Tree, and Causal Tree—and affirmed the role of the causal tree method. Wager and Athey (2015) extended the causal tree approach and discussed how Random Forests can be used to estimate heterogeneous treatment effects. Hill (2011) and Green and Kern (2012) used another idea, the Bayesian Additive Regression Tree (BART), to examine heterogeneous treatment effects; in a sense it can be regarded as a Bayesian version of the random forest method. However, the large-sample properties of BART are still unclear, so its application still has limitations.
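The core idea these tree methods automate can be shown in miniature. This is not the Athey–Imbens causal-tree estimator itself, only the subgroup logic behind it, on invented data: the ATE can mask opposite effects in different subgroups, which splitting the sample reveals.

```python
# Heterogeneous-effect sketch: the average effect hides opposite
# subgroup effects that a split by patient trait exposes.

# rows of (trait, treated?, outcome); invented illustrative data
rows = [("A", 1, 5.0), ("A", 0, 1.0), ("B", 1, 1.0), ("B", 0, 3.0)]

def effect(subset):
    """Mean treated outcome minus mean control outcome."""
    t = [y for _, d, y in subset if d == 1]
    c = [y for _, d, y in subset if d == 0]
    return sum(t) / len(t) - sum(c) / len(c)

ate   = effect(rows)
cates = {g: effect([r for r in rows if r[0] == g]) for g in ("A", "B")}
print(ate, cates)   # ATE of 1.0 masks +4.0 for group A, -2.0 for B
```

Causal trees choose such splits data-adaptively and with honest inference guarantees, rather than from a trait supplied in advance.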

For a more detailed introduction to the application of machine learning in causal inference, refer to the review by Athey and Imbens (2016). Two points deserve emphasis here. First, the intersection of causal inference theory and machine learning theory is not one-way. Some artificial intelligence experts, represented by Turing Award winner Judea Pearl, believe that the reason strong artificial intelligence has not yet been achieved is that existing machine learning theory does not consider causality. Without causality, counterfactual analysis cannot be carried out, and an agent cannot cope with complex reality. These scholars therefore suggest that future machine learning should absorb the results of causal inference theory and lay the foundation for automatic reasoning. Second, deep learning, the fastest-growing part of machine learning, has so far played no role in economics research. This may be because the learning process of deep learning is itself a black box and is ill-suited to serving as a tool for causal identification.

(II) Application of Artificial Intelligence in Behavioral Economics

Artificial intelligence can also provide methods for the study of behavioral economics. Compared with traditional economics, the research method of behavioral economics is very open: it attempts to explain behavior that traditional economics cannot by incorporating theories from other disciplines (such as psychology and sociology). Many variables may explain people's behavior, and which of them are really useful is an open question. Here, machine learning methods can help researchers select the variables that are truly valuable.

At present, some literature in behavioral economics borrows machine learning methods. For example, Camerer, Nave and Smith (2017) used machine learning to analyze the "unstructured bargaining" problem and to help find the behavioral factors that affect negotiation outcomes. Peysakhovich and Naecker (2017) used machine learning to study people's risky choices in financial markets.

In addition to pointing out the applications of machine learning in analysis, Camerer (2017) also compares machine learning with human decision-making. In his view, human decision-making can be regarded as imperfect machine learning, and behavioral deficiencies such as overconfidence and the failure to correct errors can, in a sense, be regarded as "overfitting" problems in machine learning. From this perspective, Camerer believes the development of artificial intelligence will help humans make more effective decisions.

III. Artificial intelligence as a research object

As a new technology, artificial intelligence technology has entered all areas of economic life and has had a major impact on all aspects of production and life. At present, many literatures have analyzed these effects. In this section, we will give some brief introductions to these studies in different areas.

(1) Artificial Intelligence and Economic Growth

1. Theoretical Discussion on Artificial Intelligence and Economic Growth

In terms of theoretical origins, the discussion of artificial intelligence's impact on economic growth is a continuation of the discussion of automation's impact on growth. Zeira (1998) proposed a theoretical model to analyze the growth effects of automation. In this model, an industry's products can be produced with either of two technologies—manual or industrial.

The manual technology requires more labor input but less capital investment. Which of the two technologies is used depends on the level of productivity: if productivity is low, it is more advantageous to rely on the manual technology; once productivity exceeds a critical point, it becomes more cost-effective to switch to the industrial technology.

In this way, technological progress has two effects: first, it directly increases production efficiency; second, it changes the mode of production through automation. An economy contains many industries, and the critical condition for automation differs across them, so the degree of automation is a continuous function of productivity. When the degree of automation is high, the share of capital returns in the economy is higher. On the optimal growth path, the growth rate therefore depends mainly on two things: the growth rate of productivity and the share of capital returns in the economy. Higher productivity growth and a higher capital share allow the economy to grow faster.
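One common reduced form captures this dependence (a sketch consistent with the description above, not Zeira's exact equations): with Cobb-Douglas aggregate output, capital share $\alpha$, and productivity growth rate $g_A$, balanced-growth output growth is

```latex
% Y = A K^{\alpha} L^{1-\alpha}, so along a balanced growth path:
g_Y \;=\; \frac{g_A}{1-\alpha}
```

Faster productivity growth (a larger $g_A$) raises the growth rate directly, while more automation (a larger capital share $\alpha$) amplifies any given rate of productivity growth.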

Aghion et al (2017) conducted a comprehensive analysis of the possible impact of artificial intelligence on economic growth. Their analysis centers on two effects of the "artificial intelligence revolution": automation and Baumol's disease. On the one hand, like other technological advances, the application of artificial intelligence accelerates automation while raising productivity. This reduces the use of labor in production and increases the share of capital returns in the economy. On the other hand, the "artificial intelligence revolution" will also encounter the so-called "Baumol's disease," that is, rising costs in the non-automated sectors, which reduce the share of capital returns in the economy. In general, as the economy develops, the lagging sectors matter more and more for growth; under these conditions, the impact of "Baumol's disease" becomes increasingly hard to neglect.

Combining the two effects, the impact of artificial intelligence on economic growth is uncertain. Although its use can certainly raise productivity growth, its impact on the share of capital returns is, at least in the short run, uncertain, so it is unclear how the growth rate will change.

Under normal conditions, the capital share will not rise indefinitely; in steady state it stays at a value less than 1, and the speed of economic growth then depends mainly on the rate of productivity change. It follows that how artificial intelligence affects growth depends mainly on its influence on the rate of technological progress. If artificial intelligence brings only a one-off shock, it produces a one-time increase in productivity and its effect is temporary. If its application brings a continuous increase in productivity, the growth rate will keep rising, producing an "economic singularity." According to the authors, the most critical condition for the emergence of an "economic singularity" is breaking the bottleneck in knowledge production, and whether this can be achieved depends mainly on whether artificial intelligence can truly replace humans in producing knowledge.

In the paper, the authors also discussed the distributional effects of growth. In their view, the application of artificial intelligence will make growth "skill-biased," benefiting high-skilled workers and hurting low-skilled ones. Changes in organizational structure caused by the technology will reinforce this effect: technology-intensive firms will pay higher wages to their own employees while outsourcing lower-tech production processes to low-wage, low-skilled workers. The income distribution effects of these forces cannot be overlooked.

It is worth mentioning that in Aghion et al's (2017) discussion, a key factor determining artificial intelligence's impact on growth is how it affects innovation and knowledge production, but the authors did not analyze this further. Agrawal et al (2017) complement this. Drawing on Weitzman's (1998) view that knowledge production is largely the recombination of existing knowledge, they argue that the development of artificial intelligence not only helps people discover new knowledge but also helps them combine existing knowledge effectively. The authors embedded this knowledge-combination process in the model of Jones (1995) and used the new model to analyze the influence of artificial intelligence. They found that introducing artificial intelligence enables the economy to achieve significant growth by promoting the combination of knowledge.

2. Arguments on Artificial Intelligence and Economic Growth

There is much controversy about how artificial intelligence will affect economic growth. In this section, we discuss two important arguments. The first is whether artificial intelligence can really bring about economic growth. The second is whether it can really trigger the arrival of the "economic singularity."

(1) Can artificial intelligence bring about economic growth?

The discussion of this issue is in fact a continuation of the discussion of the Solow Paradox. The "Solow Paradox," also known as the Productivity Paradox, was proposed by Robert Solow when discussing the influence of computers. At the time, he lamented that technological change could be seen everywhere, yet the statistics failed to show any impact of that technology on growth. Since then, many studies have supported Solow's observation, holding that new technologies, including computers and the Internet, have not had a substantial impact on economic growth.

Representative holders of such views are Tyler Cowen and Robert Gordon. Cowen argued in a bestseller that computers and Internet technologies, important as they are thought to be, have not produced the productivity breakthroughs of previous technological revolutions, and that, judging from current technological development, all the "low-hanging fruit" has already been picked, so the economy will be stuck in a long "great stagnation." Gordon's analysis of long-term trends in U.S. economic growth shows that recent technological advances have in fact brought only very modest productivity improvements.

The rise of artificial intelligence has likewise run into the challenge of the "Solow Paradox." Intuitively, artificial intelligence has had an important impact on all aspects of production and life, yet so far empirical evidence has had similar difficulty confirming this effect. In a famous debate, Gordon and other scholars questioned the role of artificial intelligence, arguing that people's expectations for it are clearly too high.

In response to the "techno-skeptics," the "techno-optimists" represented by Brynjolfsson have clearly expressed their opposition. According to Brynjolfsson and his collaborators, modern technologies such as computers and the Internet have undoubtedly played a key role in raising productivity and promoting growth, and the impact of newer technologies such as artificial intelligence may be even greater.

As to why the contribution of technologies such as artificial intelligence cannot be seen in statistics, Brynjolfsson et al (2017) gave a detailed discussion. In their view, there are four possible reasons that can be used to explain the deviation between people's subjective perception of technological progress and statistical data.

The first explanation is "false hopes": people have simply overestimated the effect of technological progress, and technology does not in fact bring the productivity improvements people expect.

The second explanation is "mismeasurement", which means that statistical data does not really reflect the output of technological progress, and thus underestimates its growth effect.

The third explanation is "concentrated distribution and rent dissipation": although new technologies such as artificial intelligence can indeed raise productivity, only a few star companies enjoy the resulting benefits. This not only exacerbates inequality in income distribution but also gives a small number of companies greater market power, which in turn drags productivity down.

The fourth explanation is implementation lag: for new technologies to deliver their benefits, complementary technologies, infrastructure, and organizational adjustments must first be in place. At present this complementary work is lagging, so the power of artificial intelligence cannot yet be fully realized.

The authors examined the four possible explanations one by one and found the last the most convincing. They therefore believe that the role of artificial intelligence should not be dismissed; rather, lagging complementary work has limited it at the current stage. As the complementary work is completed, the power of the "Artificial Intelligence Revolution" will gradually be released.

(2) Will artificial intelligence bring an "economic singularity"?

"Singularity" was originally a mathematical term for a point that is not well defined (e.g., one that tends to infinity) or that has strange properties. The futurist Kurzweil borrowed the term in his book to refer to the critical moment at which artificial intelligence surpasses humanity and triggers dramatic changes in human society. The so-called "economic singularity" likewise refers to a key point in time: once it is crossed, the economy will keep growing and its growth rate will keep accelerating.

Historically, many great economists have made predictions about the "economic singularity"; Keynes, the founder of macroeconomics, and the Nobel laureate Herbert Simon are among them. Although none of these predictions has yet come true, discussion of the "economic singularity" has surged with the development of artificial intelligence technology. Some "techno-optimist" scholars believe the "economic singularity" will soon arrive, since artificial intelligence can significantly raise productivity and accomplish many tasks that humans cannot.

This "techno-optimist" view has provoked much controversy. Nordhaus (2015) questioned it on empirical grounds. He pointed out, first, that as new technologies mature their prices drop sharply, so their contribution to the economy also declines rapidly; this means that relatively backward industries, rather than new ones, become the key to economic growth. Second, although people have placed great hopes in new technologies such as the Internet and artificial intelligence, these have not actually brought a substantial increase in productivity. Third, at least judging from the reality of the United States, the prices of investment goods have not fallen rapidly, nor has investment grown rapidly.

Based on this analysis, Nordhaus concludes that the "economic singularity" may still be only a distant dream. Aghion et al. (2017) analyzed the "economic singularity" theoretically. In their view, whether it arrives depends on whether the bottleneck in knowledge growth can be broken. Although endogenous growth models have shown that knowledge can be produced like a product, the process requires the participation of people. As economies grow, population growth slows, and the manpower available to invest in knowledge production shrinks. Unless artificial intelligence can replace humans in creative work and knowledge production, this crucial bottleneck can hardly be broken, and at least for now artificial intelligence has not reached that level.

(II) Artificial Intelligence and Employment

Technological progress causes "technological unemployment" even as it drives productivity growth, and artificial intelligence, as a revolutionary technology, is no exception. Compared with previous technological revolutions, the "artificial intelligence revolution" will have a broader impact on employment, with greater intensity and longer duration.

At present, the possible impact of artificial intelligence on employment has become an important policy topic, and a large literature discusses it. It should be noted that, since artificial intelligence is usually treated as an enhanced form of automation in discussions of its effects on employment and income distribution, the following two sections introduce not only the literature on artificial intelligence but also the literature on the effects of automation and robots.

1. Theoretical analyses of the employment effects of artificial intelligence and automation

The ALM model proposed by Autor et al. (2003) is the benchmark model for studying the employment effects of artificial intelligence and automation. In the ALM model, production requires two kinds of tasks: routine tasks and non-routine tasks. Routine tasks require only low-skilled labor, while non-routine tasks require high-skilled labor. In the authors' view, automation can perform routine tasks but not non-routine ones, so it substitutes for low-skilled labor while complementing high-skilled labor. Under this assumption, the automation shock is biased: it harms low-skilled workers but benefits high-skilled workers. Frey and Osborne (2013) extended the ALM model. In their model, routine tasks require low-skilled labor, while non-routine tasks require the joint input of both high-skilled and low-skilled labor. In this setting, the effect of automation on high-skilled workers becomes ambiguous; under certain conditions they too can be harmed by automation.
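A minimal numerical sketch can illustrate the ALM comparative statics. The functional forms and all parameter values below are this author's toy assumptions for exposition, not anything taken from Autor et al. (2003):

```python
# Toy sketch of the ALM task framework. Assumed (not from the paper):
# Y = sqrt(R * N), where R is routine-task input and N is non-routine
# (high-skill) labor; computers and low-skill labor are perfect
# substitutes in producing R, so firms use whichever is cheaper.

def routine_input_price(computer_price, low_skill_wage):
    """Unit cost of routine tasks: the cheaper of machines and labor."""
    return min(computer_price, low_skill_wage)

def high_skill_marginal_product(computer_price, low_skill_wage,
                                routine_budget=100.0, N=1.0):
    """dY/dN when a fixed budget is spent on routine inputs."""
    R = routine_budget / routine_input_price(computer_price, low_skill_wage)
    return 0.5 * (R / N) ** 0.5  # derivative of sqrt(R*N) w.r.t. N

w_low = 1.0
mp_before = high_skill_marginal_product(2.0, w_low)   # computers expensive
mp_after = high_skill_marginal_product(0.25, w_low)   # computers cheap

# Cheap computers displace low-skill labor from routine tasks
# (substitution) while raising the marginal product of high-skill
# labor (complementarity):
print(mp_after > mp_before)  # True
```

The sketch makes the two-sided bias concrete: once machines become the cheaper way to do routine tasks, low-skilled labor loses its role there, while the resulting abundance of routine input makes high-skilled labor more productive.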

Benzell et al. (2015) discussed the substitution of robots for labor in an overlapping-generations (OLG) model. They point out that under certain conditions robots can fully replace low-skilled work and part of high-skilled work, reducing labor demand and wages. Although the price declines brought by productivity gains after robots are adopted can improve workers' welfare to some extent, overall they cannot fully offset the damage that job displacement does to labor. The authors therefore argue that the use of robots may bring so-called "immiserizing growth": the economy grows, but social welfare declines. To prevent this, they recommend targeted training programs and subsidies for particular generations.

Acemoglu and Restrepo constructed a model that includes job creation. In the model, while automation destroys some jobs, it also creates new jobs in which labor has a comparative advantage, so the net employment effect depends on the relative strength of the two forces. They find that in long-run equilibrium the outcome depends on the costs of capital and labor: if the cost of capital is sufficiently low relative to wages, all occupations will be automated; otherwise automation has limits. The authors also note that if labor itself is heterogeneous, automation will generate income disparities within the labor force.

2. Empirical analyses of the employment effects of artificial intelligence and automation

Autor et al. (2003) analyzed the U.S. labor market over 1960-1998. They found that after 1970, "computerization" produced a "polarization effect": demand for routine work fell sharply while demand for non-routine work rose, a trend that became especially pronounced after 1980. Goos and Manning (2007) tested the ALM model's conclusions with U.K. data and found that technological progress caused polarization in the U.K. as well. Subsequently, Autor and Dorn (2013), Goos et al. (2014), and other studies analyzed U.S. and European data and likewise found polarization: under the shock of technological progress, large numbers of manufacturing jobs were taken over by services.

Graetz and Michaels (2015) analyzed robot use and economic performance in 17 countries over 1993-2007. They found that, on average, robot use raised these countries' GDP growth rates by 0.37 percentage points. Robot use also substantially increased productivity and reduced the working hours and intensity of middle- and low-skilled workers. Acemoglu and Restrepo (2017) studied U.S. labor-market data from 1990 to 2007 and found that each additional robot per thousand workers reduced employment by 0.18%-0.34% and wages by 0.25%-0.5%.
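The Acemoglu and Restrepo (2017) elasticities can be turned into a back-of-the-envelope calculation; the labor-market size below is a made-up number purely for illustration:

```python
# Back-of-the-envelope use of Acemoglu and Restrepo's (2017) estimates:
# each additional robot per thousand workers reduces employment by
# 0.18%-0.34% and wages by 0.25%-0.5%. The market size is hypothetical.

def employment_effect(robots_per_thousand_added, workers):
    """Range (low, high) of jobs lost for a given rise in robot density."""
    low = workers * 0.0018 * robots_per_thousand_added
    high = workers * 0.0034 * robots_per_thousand_added
    return low, high

# A hypothetical labor market of 1,000,000 workers that adds
# 2 robots per thousand workers:
lo, hi = employment_effect(2, 1_000_000)
print(f"jobs lost: {lo:,.0f} to {hi:,.0f}")  # jobs lost: 3,600 to 6,800
```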

3. Forecasts and trend analyses of the employment effects of artificial intelligence and automation

Beyond empirical studies, many scholars have used different methods to forecast the employment impact of artificial intelligence, with widely varying results. Frey and Osborne (2013) analyzed the probability that each of 702 U.S. occupations would be replaced by artificial intelligence and automation, concluding that 47% of jobs face the risk of replacement. Chui, Manyika, and Miremadi (2015) predicted that 45% of U.S. work activities could be performed by machines at the current level of technology, rising to 58% if artificial intelligence systems reached median human performance. By contrast, the forecast of Arntz, Gregory, and Zierahn (2016) is far more optimistic: only about 9% of jobs in OECD countries would be replaced. In China, Chen Yongwei and Xu Duo (2018) applied the method of Frey and Osborne (2013) to estimate the probability that Chinese jobs would be replaced by artificial intelligence, finding that over the next 20 years 76.76% of total employment would be affected, or 65.58% when only the non-agricultural population is considered.

In addition to econometric forecasts, some economic historians have analyzed the employment impact of artificial intelligence in light of historical experience. At a seminar organized by MIT, Gordon noted that in the 250 years since the First Industrial Revolution, no invention has caused large-scale unemployment: although jobs keep disappearing, even more opportunities keep emerging. In his view, the same mechanism will ensure that the "artificial intelligence revolution" does not cause a severe shock. Mokyr argued that as the economy develops, the share of service industries, which are relatively hard for artificial intelligence to replace, will rise. Even if artificial intelligence replaces some of these jobs, problems such as population aging will create enormous demand for labor, and the jobs thereby provided will suffice to offset the impact of artificial intelligence.

Some scholars further argue that analyses of the employment impact of artificial intelligence should take various other factors into account. Goolsbee (2018), for example, notes that most existing research considers only technical feasibility, without analyzing prices, adjustment costs, or the duration of the shock. Clearly, if these factors are ignored and one merely states in the abstract how much labor artificial intelligence will replace, the policy relevance of the conclusion is greatly diminished.

4. Policy discussions on the employment effects of artificial intelligence

Although scholars' estimates of the impact of the "artificial intelligence revolution" differ widely, most agree that, as in previous technological revolutions, artificial intelligence will in the long run create enough new jobs to replace those it destroys. The key, therefore, is to use policy to smooth the short-run shock and allow the employment structure to complete its transition.

The most important policy for coping with the short-run employment shock is strengthening education. Many studies point out that the greatest employment impact of the "artificial intelligence revolution" is not an absolute reduction in jobs, but that workers displaced from old jobs are unsuited to new ones. To help workers adapt, the government should provide education and vocational guidance, and since the shock of the "artificial intelligence revolution" is persistent, such education should be persistent as well. To finance training for the unemployed, "job mortgage loans" could be explored, allowing the unemployed to borrow against future jobs to pay for training.

(III) Artificial Intelligence and Income Distribution

Artificial intelligence may affect income distribution through several channels. First, in theory, artificial intelligence is a biased technology (directed or biased technical change): its use affects the marginal products of different groups differently and thereby their incomes. This effect operates at two levels: between factors of production, where it mainly affects the distribution of factor returns; and within the labor force, where it mainly affects the income distribution across workers of different skill levels. Second, the use of artificial intelligence also changes market structure, giving some firms greater market power and letting their owners capture more of the surplus. How these effects ultimately play out also depends heavily on the relevant policies.

1. The impact of artificial intelligence on factor returns

Differences in factor returns are among the main causes of differences in income distribution. In recent years the return on capital has been rising worldwide, with more income and wealth concentrating in the hands of a few capital owners and inequality deepening. The application of artificial intelligence may reinforce this inequality in factor returns.

Artificial intelligence is a biased technology. On one hand, its spread will reduce the demand for labor and thus lower labor's rate of return; on the other, as a capital-intensive technology, it can greatly raise the return on capital. Together these forces will continue to widen the gap between the returns to capital and labor, pushing income inequality still higher.
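The mechanism can be illustrated with a toy CES production function; the functional form and all parameter values are this author's illustrative assumptions, not estimates from the literature:

```python
# Illustrative CES sketch: Y = (a*K**rho + (1-a)*L**rho)**(1/rho).
# With substitution elasticity sigma = 1/(1-rho) > 1, capital deepening
# raises capital's share of income -- the mechanism behind a widening
# gap between the returns to capital and labor. Parameters are assumed.

def capital_income_share(K, L, a=0.4, rho=0.5):  # sigma = 2 > 1
    Y = (a * K**rho + (1 - a) * L**rho) ** (1 / rho)
    r = a * K**(rho - 1) * Y**(1 - rho)          # marginal product of capital
    return r * K / Y

share_before = capital_income_share(K=1.0, L=1.0)
share_after = capital_income_share(K=4.0, L=1.0)  # AI-driven capital deepening
print(share_before < share_after)  # True
```

When capital and labor are gross substitutes (sigma > 1), accumulating more (and cheaper) AI capital raises the fraction of output paid to capital owners, exactly the pattern described above.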

2. The impact of artificial intelligence on different workers

The bias of technology shows up not only between factors of production but also within the labor force: workers with different skills experience very different income changes after technological progress. Artificial intelligence is by nature skill-biased, and its shocks differ across occupations. One of its main functions is automation, and many studies have already documented the differing effects of automation on workers of different skill levels.

At the current stage, the occupations hit hardest by automation are those dominated by routine tasks and requiring low skills. The spread of automation has not only depressed the incomes of workers in these occupations but also put a considerable number of them out of work. Meanwhile, for non-routine, high-skill occupations, automation has mainly played a reinforcing and assisting role, so the incomes of workers in these occupations have risen rather than fallen in the face of the "artificial intelligence revolution". Although research on the skill bias of artificial intelligence itself is still scarce, as a technology of advanced automation it should logically produce similar effects.

It should be noted that as artificial intelligence advances, the scope of automation is no longer confined, as in the past, to highly routine, low-skill occupations; many less routine, high-skill occupations, such as doctors and lawyers, also face automation. Against this background, analyses of automation's impact must distinguish its type: if automation substitutes for low-skilled labor, it will widen wage inequality; if it substitutes for high-skilled labor, it may help narrow income inequality.

3. The impact of artificial intelligence on profit distribution

Besides changing the marginal returns of factors, artificial intelligence may affect income distribution through another, indirect channel: changing market power.

Basic economic theory tells us that when the market structure is not perfectly competitive, firms can earn economic profits, and the size of those profits is closely related to their market power. In recent years, market structures around the world have been concentrating, with many high-market-share "superstar firms" emerging and using their enormous market power to earn huge profits.

Many scholars believe that the use of advanced technology is an important cause of "superstar firms", and artificial intelligence, as a major new technology, will clearly reinforce this trend. However, to the author's knowledge, no study has yet empirically examined this channel through which artificial intelligence affects income distribution, so the conjecture remains for now a purely theoretical one.

4. The influence of policy on the distributional effects of artificial intelligence

The distributional effects of technological change are inevitably shaped by policy; sound policies can make the process of technological change more inclusive, letting everyone better share its fruits. Korinek and Stiglitz (2017) discussed distributional policy in the "artificial intelligence revolution". They point out that although technological progress such as artificial intelligence can increase total social wealth, in the real world people cannot insure themselves perfectly, nor can income be redistributed without cost, so such progress is unlikely to be a Pareto improvement: while some benefit from it, others are harmed. Policy intervention is therefore necessary, and it must respond to the two effects of technological progress, the concentration of surplus and changes in relative prices; taxation, intellectual-property policy, antitrust policy, and other instruments can all play a role. Kaplan (2015) discussed the relevant distributional policies comprehensively, suggesting that given the differing effects of artificial intelligence on different groups, those who gain from the technology could be taxed to compensate those who lose. Cowen (2017) notes that good social norms help policies work, so the cultivation of such norms must be attended to when redistributing income.

(IV) Artificial Intelligence and Industrial Organization

Without doubt, the development of artificial intelligence will profoundly affect industrial organization and market competition. It will influence market structure and firm behavior, and through them economic performance, and all of these phenomena pose new challenges to traditional regulation and competition policy.

1. The impact of artificial intelligence on market structure

Artificial intelligence affects market structure through two channels.

The first channel is the direct effect of the technology. Firms that use artificial intelligence enjoy a jump in productivity, making it easier for them to prevail in fierce competition. At the same time, because artificial intelligence requires high fixed costs but has low marginal costs, firms that use it are protected by high entry barriers. Together these factors make markets more concentrated.

The second channel is the change in firm organization driven by the technology. The organizational form of firms evolves with technology, and under the impact of artificial intelligence the platform is becoming an important form of business organization. Because platforms typically exhibit cross-side network externalities, they generate chicken-and-egg positive feedback that lets platform firms expand rapidly, capture the market, and come to dominate it. Combining these two factors, the rapid development of artificial intelligence has propelled the rise of a cohort of "superstar firms" and made markets highly concentrated in short order.

It should be noted that the impact of artificial intelligence on market structure shows up not only in horizontal relationships but also in vertical ones. Shapiro and Varian (2017) point out that because of the special nature of machine learning, firms adopting it are more inclined to integrate vertically in order to obtain more data and cut the cost of machine learning. On this theory we can expect that as artificial intelligence develops, large platform firms' acquisitions of downstream firms will intensify, driven no longer by the pursuit of direct profits or market share but by the contest for data resources.

2. The impact of artificial intelligence on firm behavior

The development of artificial intelligence will affect many aspects of firm behavior, making feasible many strategies that were previously out of reach.

One example is algorithmic discrimination. In traditional economics, firms' information constraints meant that "first-degree price discrimination" existed only in theory. In the age of artificial intelligence, with big data and machine learning, firms may be able to profile each customer precisely and price accordingly, achieving first-degree price discrimination and capturing the entire consumer surplus. Even if a firm does not practice first-degree price discrimination, artificial intelligence can help it conduct second- or third-degree price discrimination more effectively and so extract more consumer surplus.
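A stylized numerical comparison shows how much extra surplus perfect profiling can extract relative to a single posted price. The demand data and zero-marginal-cost assumption below are hypothetical:

```python
# Illustrative comparison: 5 consumers with known willingness to pay,
# zero marginal cost. A single posted price versus perfect
# ("first-degree") price discrimination via individual profiling.

willingness_to_pay = [10, 8, 6, 4, 2]

def best_uniform_price_profit(wtp):
    """Seller posts one price; each consumer buys iff wtp >= price."""
    return max(p * sum(1 for w in wtp if w >= p) for p in wtp)

def discriminating_profit(wtp):
    """Seller charges each profiled consumer exactly their willingness
    to pay, capturing the entire consumer surplus."""
    return sum(wtp)

print(best_uniform_price_profit(willingness_to_pay))  # 18 (posted price 6)
print(discriminating_profit(willingness_to_pay))      # 30
```

The gap (30 versus 18) is the surplus that profiling transfers from consumers to the firm, which is why the welfare consequences of algorithmic discrimination are contested.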

Another example is algorithmic collusion. Collusion has long been a central concern of industrial organization theory and antitrust law. Firms in a market can collude to divide the market and raise their profits, and industrial organization theory tells us that such collusion reduces output, raises prices, and harms consumer welfare. Under traditional economic conditions, however, collusion is hard to sustain because of communication difficulties and the "prisoner's dilemma". In theory, repeated-game mechanisms can support collusion, but in practice the difficulty of monitoring defection, punishing defection, and reading economic information makes it hard to achieve. With the development of artificial intelligence, collusion that was once difficult becomes possible. Unlike in the past, colluding firms no longer need to guess their partners' actions or coordinate through signals; a pricing algorithm can solve these problems. Against this background, factors that used to determine the difficulty of collusion, such as the number of firms or the nature of the industry, no longer matter much, and firms can collude smoothly under almost any conditions.
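A highly stylized sketch can show how simple pricing rules sustain a supra-competitive price without any communication. All prices are hypothetical, and real pricing algorithms are far more complex; this merely illustrates the trigger-strategy logic from repeated games:

```python
# Stylized repeated-Bertrand sketch: each firm's pricing algorithm
# charges the high "collusive" price as long as the rival did so last
# period, and punishes any undercutting by reverting to the competitive
# price (a trigger strategy). All numbers are hypothetical.

COLLUSIVE, COMPETITIVE = 10.0, 1.0

def trigger_price(history):
    """history: list of (own_price, rival_price) pairs from past periods."""
    if any(rival < COLLUSIVE for _, rival in history):
        return COMPETITIVE  # punishment phase
    return COLLUSIVE        # cooperation phase

def simulate(periods=5, deviate_at=None):
    history, prices = [], []
    for t in range(periods):
        p1 = trigger_price(history)
        p2 = trigger_price([(b, a) for a, b in history])  # firm 2's view
        if deviate_at == t:
            p2 = COLLUSIVE - 1  # one firm undercuts
        history.append((p1, p2))
        prices.append((p1, p2))
    return prices

# No deviation: the collusive price is sustained every period without
# any explicit communication between the two algorithms:
print(simulate())                  # [(10.0, 10.0), ...] in all periods
# A single deviation triggers reversion to competitive pricing:
print(simulate(deviate_at=1)[-1])  # (1.0, 1.0)
```

The point of the sketch is the one emphasized above: the algorithms need no messages or guesses about each other, since the pricing rule itself encodes both the reward for cooperating and the punishment for defecting.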

Beyond algorithmic discrimination and algorithmic collusion, the development of artificial intelligence will raise many new competition problems. For example, platform firms can use search engines to influence people's decisions, or use algorithms to influence matching outcomes on their platforms.

(V) Artificial Intelligence and Trade

The impact of artificial intelligence on trade will be many-sided. First, as a major technological advance, artificial intelligence will significantly affect factor returns and change the relative returns of different factors, visibly altering countries' dynamic comparative advantage. Second, as an emerging industry, artificial intelligence technologies and talent have themselves become important objects of trade, and countries' strategic trade policies will be critical to the industry's development. Third, at the micro level, the use of artificial intelligence will also affect firm productivity, which, according to the "new new trade theory", will affect firms' export decisions.

However, the existing literature that directly addresses artificial intelligence and international trade is still sparse; to the author's knowledge, Goldfarb and Trefler (2018) is so far the only paper devoted to the question. The two authors first identify two important features of the artificial intelligence industry: economies of scale and knowledge intensity. Because the industry depends heavily on data, economies of scale mean it develops more easily in countries with larger populations and richer transaction data of all kinds, such as China; knowledge intensity means that how knowledge diffuses and spreads will strongly shape each country's artificial intelligence development.

Having identified these basic features, the two authors discuss the effectiveness of strategic trade protection in developing the artificial intelligence industry. In their view, the traditional strategic-trade-protection literature has an important limitation: such policies work only when profits exist. Once government protection generates excess profits, if entry barriers are low enough, more firms enter the industry until profits are squeezed to zero, and strategic trade protection then fails. Because the artificial intelligence industry has strong network externalities, a firm that develops first builds a high entry barrier out of its scale, which means that even high industry profits will not attract new entrants. Under these conditions, strategic trade protection becomes more effective.

Using several models, the two authors examine the effects of policies such as subsidies, talent policies, and cluster policies. They point out that whether these policies succeed depends mainly on whether the knowledge externalities on which artificial intelligence relies are national or global in scope. If they are mainly national, the government can effectively support firms through industrial policy and strategic trade protection, making them more competitive worldwide. If they are global, knowledge diffuses quite easily, and the effects of such policies will not be significant.

At the end of the paper, the two authors focus on privacy policy. Empirically, stronger privacy protection restricts firms' access to data and thus hinders the development of an artificial intelligence industry whose key resource is data. In practice, privacy policies are therefore often used as implicit trade protection against foreign firms. In the authors' view, however, such policies also harm domestic firms and are thus inadvisable. They suggest that to support domestic firms, governments can instead adopt other measures, such as data-localization rules, restrictions on access to government data, industry regulation, local rules for autonomous driving, and mandatory access to source code.

(VI) Artificial Intelligence and Law

The rise of artificial intelligence raises many new legal questions.

For example, artificial intelligence can to some extent replace or assist human decision-making; in this process, should artificial intelligence have the status of a legal subject?

In applications, artificial intelligence needs data generated by other devices or software as they run; who owns these data, and who can grant valid authorization to use them?

When an accident or product-liability issue is caused by artificial intelligence, how should we distinguish between human operation and a defect in the artificial intelligence itself?

How should discrimination, collusion, and other conduct produced by algorithms be dealt with?

......

These questions are very practical yet highly contested. For reasons of space, the author will discuss only two of them in detail; for more on the legal issues raised by artificial intelligence, see works such as Pagallo (2013), Ezrachi and Stucke (2016), and Stucke and Grunes (2016).

1. Privacy issues raised by artificial intelligence

At the current stage, applications of artificial intelligence are inseparable from data. For example, when merchants use artificial intelligence to mine consumer preferences, they must rely on data collected from consumers (including identity information, transaction habits, and so on). For consumers, letting merchants collect such data has both benefits and costs: on one hand, the data allow merchants to understand their preferences more fully and serve them better; on the other, once collected, the data can bring many problems, such as price discrimination by merchants, marketing harassment, and, in some extreme cases, even threats to personal safety.

When data collection and exchange were infrequent, a consumer harmed by a data-related problem could easily trace the responsible party, and could therefore conduct an effective cost-benefit analysis of the risks of handing over data. Under rational decision-making, some consumers would voluntarily give up their data.

But with the development of big data and artificial intelligence, this situation has changed:

(1) Once collected, data can be stored for longer and used more extensively in the future, so the benefit a consumer gains from handing over data and the cumulative risk incurred become highly asymmetric.

(2) Because merchants now collect data so frequently, when a consumer suffers a data-related problem it is hard to tell which merchant caused it, so accountability is in practice very difficult.

(3) After collecting consumers' data, merchants may not use them as reasonably as they promised, and consumers can hardly punish such behavior.

Against this background, how to govern data use effectively, and how to make good use of data while protecting consumers' legitimate rights, has become a question of particular concern. There is currently much debate over how to protect consumer privacy in the age of artificial intelligence: some scholars argue for more government regulation, some for governance by firms themselves, and some for governance by civil-society organizations. On the whole, each approach has its pros and cons, so the question remains an open one.

2. Product liability issues of artificial intelligence

Artificial intelligence and devices that use it (such as robots) can greatly raise productivity, but they also bring greater risks in use. Against this background, defining the product liability of artificial intelligence, and clarifying what liability its manufacturer must bear once an accident occurs, becomes an important issue.
