The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence - Stanford Digital Economy Lab
-
- Original
- In the current economic system, when AI automation becomes widespread, it tends to concentrate wealth in the hands of those who cannot be replaced by AI.
- Furthermore, the digitalization of knowledge increases its liquidity, which also concentrates wealth.
-
When useful knowledge is inalienably locked in human brains, so too is the power it confers. But when it is made alienable, it enables greater concentration of decision-making and power.[33]
- As knowledge is made less dependent on specific individuals, the concentration of wealth and power increases.
- Structural problems:
-
The risks of the Turing Trap are amplified because three groups of people—technologists, businesspeople, and policymakers—each find it alluring.
- Oh, that makes sense (blu3mo)
-
Technologists have sought to replicate human intelligence for decades to address the recurring challenge of what computers could not do. The invention of computers and the birth of the term “electronic brain” were the latest fuel for the ongoing battle between technologists and humanist philosophers.
- lol (blu3mo)
- This conflict certainly exists.
- This is what I want to discuss in Baji Seminar Essay 1S Semester.
-
- Oh, that makes sense (blu3mo)
- The idea of augmentation rather than automation.
- That’s true (blu3mo)(blu3mo)
- It’s not AI; it’s Human Augmentation.
- Society tends to lean towards automation if left unchecked, but isn’t augmentation better because it can do more and prevent the concentration of wealth?
- By augmentation, I don’t necessarily mean agency or anything like that, but rather the importance of not excluding humans from social activities.
- If automation is done too easily, humans will be excluded.
- Excluding humans from social activities leads to increased liquidity and concentration of wealth/power, which is not good.
- It’s not about employment, I see (blu3mo)
- It’s more like collaboration than augmentation.
- In other words, the argument is that by not eliminating dependence on humans, we can expand what can be done, avoid the concentration of power, and be happy.
- The discussion then turns to why, if left alone, the tendency is towards automation.
-
In sum, the risks of the Turing Trap are increased not by just one group in our society, but by the misaligned incentives of technologists, businesspeople, and policymakers.
-
- Summary
- Normative discussion
-
The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.
- The former is better.
-
- However, if left alone, there is a tendency towards the latter, leading to the Turing Trap.
- Normative discussion
-
Wow, this article is amazing. I want to write about a VR version of this argument in Baji Seminar Essay 1S Semester.
- It doesn’t stop at the argument that “people will lose their jobs,” but goes a step further to the problem that “increased liquidity of abilities leads to the concentration of wealth and power.”
- It isn’t a vague claim like “people and AI should coexist”; it specifically addresses what form augmentation should take.
- Furthermore, it discusses how the current social structure tends to drift away from that “good future.”
- Overall, I really like that it avoids the vagueness and the missing “so what” that critiques of technology often have.