AI History Article
1.0 INTRODUCTION
Artificial Intelligence (AI) is one of the most misunderstood terms in technology. It is often portrayed as futuristic robots or sci-fi machines, but AI is much broader and more grounded. At its core, AI refers to systems or machines capable of performing tasks that typically require human intelligence, such as reasoning, learning, problem-solving, perception, language understanding, and decision-making. It’s not always about robots; it’s about giving machines the ability to mimic or approximate aspects of human thought.
2.0 AI vs. AUTOMATION
Before AI, there was automation: macros in spreadsheets such as Lotus 1-2-3 (a program I have been using since 1991) or the preset programs in washing machines. These are examples of fixed, pre-programmed instructions: they don’t learn or reason, but they laid the groundwork. They showed that machines could handle tasks on our behalf, hinting at the possibilities of smarter systems.
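To make that contrast concrete, here is a minimal sketch of the difference between a fixed, pre-programmed rule and a rule derived from data. The scenario, function names, and numbers are invented purely for illustration.

```python
# Illustrative contrast (hypothetical example): a fixed, pre-programmed rule
# versus a "learned" rule whose parameter comes from data.

def macro_discount(total: float) -> float:
    """Fixed automation: the rule is hard-coded and never changes."""
    return total * 0.9 if total > 100 else total

def learn_threshold(amounts, labels):
    """Learning counterpart: pick the cutoff that best separates the
    'discount' examples from the 'no discount' examples."""
    candidates = sorted(set(amounts))
    return max(
        candidates,
        key=lambda t: sum((a > t) == bool(l) for a, l in zip(amounts, labels)),
    )

history = [(50, 0), (80, 0), (120, 1), (200, 1)]  # (amount, was_discounted)
threshold = learn_threshold(*zip(*history))
print(threshold)  # the cutoff is now derived from examples, not written by hand
```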
3.0 EARLY FOUNDATIONS
3.1 1940s–1980s
The conceptual seeds of AI were planted in the 1940s and 50s. Alan Turing asked whether machines could think and proposed the famous Turing Test in 1950.
In 1956, at the Dartmouth Conference, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon established the field and popularized the term “Artificial Intelligence,” which McCarthy had coined in the conference proposal.
Early programs could solve math problems or play checkers. The 1960s–80s saw the rise of expert systems and rule-based programs, as well as early robotics. These systems were manually coded but could simulate reasoning, laying important groundwork.
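As a rough illustration of how those rule-based expert systems worked, here is a minimal sketch of forward chaining over hand-written if-then rules. The rules and facts are invented and far simpler than any real system of the era.

```python
# A toy rule-based "expert system": hand-coded if-then rules plus simple
# forward chaining. Rules and facts are illustrative only.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire any rule whose conditions are all satisfied."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# Derives 'possible_flu' and then 'see_doctor' from the hand-written rules.
```

Everything such a system "knows" has to be typed in by a human expert, which is exactly why these programs could simulate reasoning but never learn.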
3.1.1 Networking and Computing Power
ARPANET to the Web
AI’s growth depended on computing and connectivity. ARPANET in the late 1960s connected research mainframes, allowing AI researchers to share code and data. NSFNET in the 1980s expanded this collaboration to academic institutions. Mainframes and supercomputers, like IBM’s and DEC’s, powered early AI experiments, from chess programs to knowledge-based systems. The 1990s brought the Internet and early browsers (Mosaic, Netscape), which made information widely accessible. Search engines like Yahoo and Google used AI techniques to index and rank vast amounts of data, bringing AI concepts into everyday tools.
Robotics and AI – The Body and Brain
Early robots like Arok in the 1970s were mechanical marvels, using hydraulics, motors, and remote control. They were the body, not the brain: pre-programmed, unable to adapt or learn. Modern robotics layers AI on top, providing perception, planning, and decision-making. Hydraulics remain the muscles; AI is now the brain.
3.2 1990s–2000s
From Theory to Everyday Use
Statistical methods and machine learning became mainstream. AI slipped quietly into products: spam filters, recommendation engines, speech recognition. While not always visible, these were significant leaps toward intelligent systems.
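As a flavour of that statistical turn, the sketch below trains a tiny naive Bayes spam filter on word counts, one common way such filters were built. The messages are invented, and the code assumes scikit-learn is installed.

```python
# A minimal sketch of a statistical spam filter: naive Bayes over word counts.
# The tiny dataset is invented; real filters train on millions of messages.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now",         # spam
    "claim your free money",        # spam
    "meeting rescheduled to noon",  # ham
    "lunch tomorrow with the team", # ham
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize money"]))      # likely ['spam']
print(model.predict(["team meeting at noon"]))  # likely ['ham']
```

The point is the shift in approach: instead of an engineer writing keyword rules, the model estimates word probabilities from labelled examples.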
3.3 2001–2010
Data-Driven AI
Manufacturing embraced automation and robotics, with companies like Fanuc and ABB leading industrial automation. Sensors and vision systems improved quality control. Machine learning moved beyond rules to statistical models like support vector machines and early neural networks. Public awareness grew through robots like Honda’s ASIMO and films like A.I. Artificial Intelligence.
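For illustration, here is a minimal support vector machine of the kind mentioned above, loosely framed as a quality-control check that separates two clusters of measurements. The data and labels are synthetic, and the code assumes scikit-learn is installed.

```python
# A minimal SVM sketch: separate two synthetic clusters of 2-D measurements
# (e.g. "defective" vs. "good" parts). Purely illustrative data.

from sklearn.svm import SVC

X = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.3],   # cluster 0: "defective"
     [2.0, 2.1], [2.2, 1.9], [1.8, 2.0]]   # cluster 1: "good"
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[0.2, 0.2], [2.1, 2.0]]))  # expected: [0 1]
```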
3.4 2010–2020
Deep Learning and Industry
This period saw a revolution. Deep learning enabled breakthroughs in image recognition, speech, and natural language. Tools like TensorFlow and PyTorch made machine learning accessible, while cloud platforms provided power and scale. Manufacturing adopted predictive analytics and connected machines (Industry 4.0). Humanoid robots like Sophia (Hanson Robotics Limited, 2016) gained public attention. Though mostly scripted, Sophia showcased how robotics, AI, and design could merge for lifelike interaction. AlphaGo’s victory over Lee Sedol in 2016 highlighted the strategic reasoning deep learning could achieve. Autonomous vehicles began real-world trials.
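As a rough sketch of what tools like PyTorch made accessible, here is a tiny feed-forward classifier. The 28x28 input shape and layer sizes are illustrative assumptions, not any particular published model, and the code assumes the torch package is installed.

```python
# A minimal deep-learning sketch in PyTorch: a tiny feed-forward network
# that maps a 28x28 grayscale image to 10 class scores.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),               # 28x28 image -> 784 values
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
dummy_batch = torch.randn(4, 1, 28, 28)  # 4 fake grayscale images
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 10])
```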
3.5 2020–2025
Generative AI and Advanced Robotics
Recent years brought generative AI into the spotlight. Large language models (like GPT-4 and GPT-5), multimodal systems, and image/video generators show creative abilities once reserved for humans. Manufacturing uses AI-driven digital twins, collaborative robots, and autonomous systems. Humanoid robots such as Engineered Arts’ Ameca offer even more lifelike gestures and speech, though they remain largely demonstrations. Cloud computing, 5G, and edge devices allow real-time AI operations, while ethical and regulatory discussions intensify worldwide.
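For a hands-on flavour of generative text AI, the sketch below uses the open-source Hugging Face transformers pipeline with the small GPT-2 model. This is an assumption chosen for illustration; models like GPT-4 and GPT-5 are accessed through commercial APIs rather than run locally, and the first call downloads the model weights.

```python
# A minimal generative-text sketch using an open model via Hugging Face
# transformers (illustrative stand-in for larger commercial LLMs).

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence began as", max_new_tokens=30)
print(result[0]["generated_text"])  # a short, model-written continuation
```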
4.0 THE DARK SIDE?
AI in Warfare and Media
AI also raises serious ethical challenges. Autonomous weapons (drones, loitering munitions) can select targets with minimal human oversight, risking errors and accountability gaps. AI powers surveillance, predictive analytics, and cyberwarfare, while deepfakes and AI-generated propaganda threaten trust and stability. International laws are struggling to keep pace.
4.1 AI systems can be corrupted or disrupted by malicious software, though the mechanisms differ somewhat from those of traditional viruses. Common attack routes include:
- Traditional malware attacks
- Adversarial or data poisoning attacks (a toy sketch follows after this list)
- Model manipulation and backdoors
- Cloud and API exploits
AI is not “immune”; it’s as vulnerable as the environment it runs in. Strong cybersecurity practices, secure training pipelines, and ethical AI governance are crucial.
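To make the data-poisoning item above concrete, here is a toy sketch on synthetic data showing how flipping a fraction of training labels can quietly degrade a model even when the serving code is untouched. The dataset, model choice, and poisoning rate are all illustrative assumptions, and the code assumes scikit-learn is installed.

```python
# Toy data-poisoning sketch: corrupt a slice of the training labels and
# compare the resulting model with one trained on clean data.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison 30% of the training labels by flipping them.
y_bad = y_tr.copy()
n_poison = int(0.3 * len(y_bad))
y_bad[:n_poison] = 1 - y_bad[:n_poison]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))  # typically lower
```

The attack never touches the deployed model or its code, which is why secure training pipelines matter as much as conventional cybersecurity.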
4.2 AI and Human Creativity
Generative AI can create images, music, video, and text. Images are snapshots; videos add motion and context, making them harder to verify and more persuasive. While these tools expand creativity, they challenge ideas of originality, authorship, and trust. Artists worry about their styles being replicated; at the same time, many use AI as a collaborator. The future will likely value authenticity even more. Just as handmade crafts became prized in the industrial age, human-made art may become a premium symbol in the AI era.
4.3 Beyond Sci-Fi – VR, AR, MR and AI
Immersive technologies depend heavily on AI. VR simulates worlds for training or entertainment; AR overlays data onto real space; MR blends the two. AI powers object recognition, spatial mapping, and adaptive interactions. These tools are changing manufacturing, healthcare, defense, and entertainment, offering new ways to learn, work, and collaborate, but also raising privacy and security risks.
Will they turn against us, just like in the movies?
It is possible to theorize, though the following remain speculative scenarios.
i. The “I, Robot” Scenario – Autonomy and Conflict
If AI systems gain greater autonomy and decision-making capability (especially in robotics, defense, and infrastructure), the risk of unintended consequences rises. A misalignment between an AI’s programmed goals and human ethics could cause harm, whether through accidents, malfunctions, or misuse by bad actors. However, strict regulation, AI alignment research, and ethical frameworks aim to prevent such outcomes.
ii. The “Matrix” Scenario – Control and Dependency
The bigger risk may not be AI turning hostile, but humans becoming overly dependent on it. As AI integrates into finance, healthcare, manufacturing, governance, and personal life, there’s a chance of systemic control not by conscious machines, but by those who own and deploy AI systems. This could lead to surveillance, manipulation (deepfakes, algorithmic bias), and loss of personal agency.
iii. The Middle Ground – Human-AI Coexistence
A more likely path is continued integration with checks and balances: AI assisting in decision-making, robotics enhancing productivity, and immersive technologies like VR/AR/MR reshaping experiences. Risks will remain, but the focus will be on governance, ethics, cybersecurity, and AI safety research to ensure beneficial use.
5.0 CONCLUSION
AI is not a single invention but a continuum of progress: from macros to machine learning, from remote-controlled arms to humanoid robots, from ARPANET to generative AI. Its history shows a dance between computing power, connectivity, and creativity. Today, AI touches manufacturing floors, art studios, battlefields, and classrooms. The challenge is not only technological but ethical: to ensure AI remains a tool for progress and creativity, not manipulation or harm. The next chapter will depend on how we balance innovation with responsibility.