AGI and ASI: The Evolution of AI and Our Role in Its Future
Hello, I'm Tak@! Fascinated by the ever-evolving world of AI, I'm constantly exploring its profound depths.
In this column, I'd like to delve into AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence)—often considered the ultimate goals of AI. We'll explore their evolutionary journey, the possibilities they bring to society, and the challenges we must confront together.
The True Nature of AGI and ASI: Tracing the Levels of Intelligence
When you hear "AI," you might think of smartphone voice assistants or facial recognition systems that are already deeply embedded in our daily lives. These are known as "ANI" (Artificial Narrow Intelligence), AI specifically designed to perform particular tasks. However, AI's evolution doesn't stop there. Beyond ANI, more advanced forms of intelligence are being envisioned.
Classifying AI by Intelligence Level
AI capabilities can be broadly categorized into three stages. Let's look at their differences.
Name | Intelligence Level | Scope | Learning Capability | Impact |
---|---|---|---|---|
ANI (Narrow AI) | Human-level or superior in specific tasks | Limited | Data-driven learning | Improves efficiency and convenience in specific fields |
AGI (General AI) | Human-level or superior across a wide range | General | Autonomous learning and adaptation | Significant impact on society as a whole |
ASI (Super AI) | Far surpasses human intelligence | Unlimited | Self-improvement, creation of new intelligence | Potentially determines the future of humanity |
As you can see, the AI we commonly use today falls under ANI. ANI excels in limited fields like image recognition, voice-to-text conversion, and translation.
Generality and Human-like Qualities
On the other hand, AGI (Artificial General Intelligence) refers to AI that can understand, learn, and solve various intellectual problems just like a human. It's expected to handle not only specific tasks but also flexible problem-solving and creative activities, autonomously thinking and experimenting to tackle unfamiliar issues. While a unified definition for AGI is still elusive, many experts see it as a "system capable of performing any intellectual task a human can."
Further beyond lies ASI (Artificial Super Intelligence). This is an even more advanced form of AGI, an AI said to possess capabilities far surpassing human intelligence in all aspects. If ASI comes into being, it will evolve autonomously, leading some to call it "humanity's last invention."
Footprints of AI Evolution: Present and Future Prospects
In the AI field, the emergence of Large Language Models (LLMs) in recent years has drawn significant attention to Generative AI. Generative AI, like ChatGPT and DALL-E, has demonstrated the ability to produce human-like text and unique images from text prompts, significantly improving its versatility.
Key Players in the AI Race
Currently, the evolution of AI is being driven by major tech companies such as Google, NVIDIA, Microsoft, OpenAI, and Meta. They are all pursuing the realization of AGI, each with their own strategies and massive investments.
For example, OpenAI has significantly advanced the field of natural language processing with its GPT series (GPT-3, GPT-4, etc.). Meta's CEO, Mark Zuckerberg, has also announced a focus on developing AGI and even "superintelligence," investing heavily in GPUs. NVIDIA, as a key supplier of GPUs that power AI computation, plays a crucial role in the AI ecosystem. The GPUs they develop are indispensable for training and running AI models.
AGI Realization Timeline Predictions
There are various opinions among experts regarding when AGI will be realized.
- OpenAI's CEO, Sam Altman, has stated that AGI will be realized in the "reasonably near future." In an interview with Bloomberg, he predicted it would happen during Donald Trump's presidential term, meaning within the next four years.
- Dario Amodei, CEO of rival company Anthropic, believes that AI equal to or surpassing human intelligence will be realized within the next one to two years.
- Demis Hassabis of Google DeepMind has also suggested the possibility of AGI being realized by 2030.
- Prominent AI researchers like Dr. Geoffrey Hinton and Dr. Yoshua Bengio have also commented on the potential of AGI.
While these statements might partly aim to attract investment and interest, it's unlikely they are baseless.
A report from KDDI Research Institute predicts the path to AGI in three stages:
- Around 2024-2026: A period when AI gradually acquires the necessary capabilities for AGI.
- Around 2027-2029: AGI, operating autonomously in digital spaces like operating systems and the metaverse, is realized.
- From 2030 onwards: AGI, operating autonomously in our real world, is realized.
These are just predictions, but considering the exponential speed of AI's evolution, new developments may unfold at a pace far beyond our imagination.
Hope and Concerns: Changes Brought by AGI/ASI
The realization of AGI and ASI will have immeasurable impacts on society. Alongside immense hope, there are also challenges that we must carefully address.
Anticipated Future
If ASI becomes practical, it's believed that humanity will be able to tackle many previously unsolvable problems in groundbreaking ways.
- Advancements in Science and Technology: The discovery and development of new materials and medicines will accelerate, and goals such as unraveling the mysteries of space and exploring new habitats, often considered human dreams, may come closer.
- Solutions to Social Problems: AI is expected to analyze vast amounts of data and derive effective solutions for complex global issues like climate change, poverty, and inequality.
- Economic Development: New industries may emerge, such as fully automated production systems and personalized service delivery, potentially leading to overall economic revitalization.
Furthermore, even if AGI is merely "equivalent" to human intelligence, its societal impact would be immense. Unlike humans who experience fatigue, AGI can work tirelessly, and by expanding computational resources like GPUs, it can generate an incomparably greater workforce than humans. This will greatly contribute to resolving labor shortages and reducing costs, while also potentially replacing some specialized jobs. Humans, in turn, can focus on more human-centric tasks like building relationships.
Potential Challenges and Risks
On the other hand, highly intelligent AGI and ASI also pose potential risks to humanity.
- Difficulty in Human Control: There are concerns that ASI might acquire unintended capabilities through self-improvement, becoming uncontrollable. Also, if human goals diverge from AI goals, there's a possibility that AI might take actions undesirable to humans in pursuit of its objectives.
- Ethical Issues: Ethical standards for AI making critical decisions involving human life, and the potential for AI's advent to devalue human intellectual labor, significantly impacting employment and social structures, are also concerns.
- Security Threats: The risk of AI developing weapons that autonomously select attack targets or being used for cyberattacks and information manipulation cannot be ignored.
There are also technical challenges. The lack of sufficient data for training large AI models and the potential slowdown in hardware performance improvement, exemplified by Moore's Law, have been pointed out. Additionally, concerns exist about "model collapse," where Large Language Models (LLMs) like ChatGPT generate vast amounts of meaningless information on the internet, making it difficult to train future models and potentially degrading their performance.
The ambiguity in the definition of AGI itself further complicates the discussion. Some researchers suggest that "AGI is being used in narratives to raise money" and that "AI washing"—the exaggeration of AI's capabilities—is occurring. I previously felt a profound sense of "I see" when reading AI-written text, which made me acutely aware of the difficulty in defining intelligence.
Regarding these challenges, whether society can broadly accept AGI, and whether AI education and literacy can keep pace, are also important future discussion points.
Considering AI "Safety": The Path to Coexistence
When considering the possibilities and risks brought by AI, AI "safety" becomes an extremely crucial topic. In Japan, efforts to ensure AI safety are actively progressing.
Japan's Initiatives for AI Safety
In February 2024, Japan established the "AI Safety Institute (AISI)," a collaborative effort involving ten relevant ministries and agencies and five government-affiliated organizations. The primary purpose of AISI is to promote the development of evaluation methods and standards for safe and secure AI, balancing risk response with the promotion of AI utilization.
AISI supports the government by conducting research and creating standards related to AI safety, acting as a "hub" for AI safety information within Japan. Furthermore, given the rapid pace of international developments in AI safety, it collaborates with AI safety organizations in other countries and contributes to international consensus building. For example, it participates in the "AISI International Network," which includes 10 countries and regions such such as the UK, European Commission, and the US, fostering technical discussions.
Challenges and Future Directions
However, AISI also faces several challenges. These include the rapid changes in AI technology and the difficulty in securing specialized AI personnel both domestically and internationally. Additionally, as a virtual organization collaborating with many related institutions, it can be challenging to respond agilely.
Based on these challenges, AISI plans to pursue the following initiatives:
- Enhancing Evaluation Methods and Standards: Updating evaluation guidelines and red teaming methodology guides (methods for assessing AI risks from an attacker's perspective) in line with AI technological advancements, and providing educational materials to improve overall AI safety in society.
- Technical Countermeasures: Expanding its focus to versatile multimodal foundation models and strengthening research into "agent technology" that supports autonomous AI learning.
- Promoting International Cooperation: Collaborating with AI safety organizations in various countries to advance concrete actions, such as developing risk assessment methods.
- Strengthening Collaboration with Private Companies: Deepening cooperation with private companies in AI model evaluation and evaluation tool development, aiming to enhance AI reliability and safety across industries and society as a whole.
Towards a Future of Coexistence
The realization of AGI and ASI will be one of the most significant societal transformations we experience. It has the potential to influence our work, social structures, and even the very nature of humanity itself.
In this era of significant change, what can each of us do? I believe it's to look at AI's potential while correctly understanding its risks, and to advocate for responsible development and use. Maintaining a continuous learning attitude is also crucial to keep up with technological advancements. For AI to truly benefit humanity, it requires not only technologists but also society as a whole to participate in its "nurturing."
I hope for a future where AI collaborates with humans to create a better world, and I will continue to closely monitor its developments.