
Opinion: When Philosophy meets AI

Since AI lacks intuition and common sense, it can benefit greatly from philosophical insights that delve into meaning, reasoning, and lived experience

By Telangana Today
Updated On - 11 June 2025, 07:41 PM

By B Maria Kumar

When I chose to study Philosophy for my post-graduation at the Arts College of Osmania University in Hyderabad during the early 1980s, very few of my friends were pleased with my decision. Understandably, they had concerns about my future employability, given the uncertain prospects of what was then seen as one of the most abstract disciplines in the humanities.

But I was drawn to Philosophy by an inner inclination that I could not ignore. As the years passed, I could never have imagined that, more than four decades later, this once so-called godforsaken field, whose most celebrated credential seemed to be the endorsement of Socrates, the father of Western Philosophy, would rise as a vital guide in the present age of artificial intelligence (AI).

The Cornerstone

In today’s world, shaped by rapidly advancing science, economy, and technology, Philosophy has quietly reclaimed its relevance, reminiscent of the biblical verse: “The stone that the builders rejected has become the cornerstone.”

It all began when I happened to glance through an article written by Marco Argenti, the Chief Information Officer at Goldman Sachs, published on April 16, 2024, in the Harvard Business Review. In the article, he expressed a sense of unpredictability and concern about the reliability of AI systems, noting that they may not always function as intended. This observation was based on his own firsthand experience in the current technological environment. He explained that the root cause of many unintended outcomes lies in the absence of critical thinking skills, which are necessary for understanding and managing the complexities involved in AI construction.

To address this issue, he suggested equipping AI professionals with a philosophical mindset when evaluating the quality of code and improving system performance. While AI is capable of producing code that is technically correct in terms of language and structure, it does not consistently succeed when measured against the desired outcomes or broader human values.

Existential Outlook

Considering these insights, it becomes increasingly important that the cognitive disposition of AI engineers be shaped by deeper forms of reflection. Elements such as the Socratic method of questioning, logical reasoning, ethical and moral awareness, and an existential outlook should be part of their thinking process while designing the systems and writing the code.

These philosophical tools can guide them to consider not just whether the code works, but whether it serves the intended purpose responsibly. This need becomes all the more urgent in view of the warnings expressed by many distinguished thinkers around the world about the ambiguous and far-reaching consequences of AI.

Around ten years ago, the renowned physicist and cosmologist Stephen Hawking drew global attention to the existential threats emerging from AI, even in situations where no deliberate malice is involved. He emphasised how unregulated and poorly managed AI systems could lead to disastrous outcomes. Such dangers could include the accidental launch of autonomous weapons, abrupt financial system breakdowns, or large-scale climate disruptions with irreversible effects on human survival.

In recent times, Israeli historian Yuval Noah Harari has frequently voiced apprehension over the life-threatening possibilities created by uncontrolled advances in AI. He has hinted at risks involving synthetic pandemics, biochemical weapons and unforeseen modifications in genetic engineering, all of which challenge our ability to oversee what we bring into existence.

Adding further urgency to this issue, Geoffrey Hinton, the 2024 Nobel Prize winner in Physics and widely regarded as the godfather of AI, has lately stated that the chances of catastrophic consequences from AI have become more real and pressing than ever. These observations compel serious thought on the dilemmas surrounding vague instructions and the lack of comprehensive risk mitigation strategies. The problem lies not only in technical imperfections but also in the absence of a sound and accountable framework to guide the safe use of AI in the real world.

Why before How

The emerging crisis arises from the fact that the breakthroughs in AI have not been matched by equal progress in the human dimension of safety. While AI engineering continues to expand rapidly, the ethical and moral frameworks necessary for its safe application often lag behind. A truly effective AI engineer, therefore, must not only possess technical expertise but also cultivate a holistic viewpoint, a judicious stance, compassion for fellow beings, and a fundamentally humanistic approach.

These existential considerations are at the heart of philosophical methods, which offer essential guidance in designing and operating AI systems prudently. This is why Marco Argenti emphasises the importance of understanding the ‘why’ before working on the ‘how’. AI may generate computer code that appears correct on the surface but fails to solve real problems when the prompts are ill-defined or the issues poorly framed.

Lacking intuition and common sense, AI stands to benefit from philosophical insights that explore meaning, reasoning, and lived experience. Logic and debate, for instance, can embed safety checks into AI systems, such as verifying the absence of humans before deploying AI-powered weapons.

Socratic questioning can help minimise emissions and prioritise environmental responsibility. Mental models can shape trading algorithms to reduce volatility, balance profits with stability, and prevent economic collapse. These are just a few examples of how philosophical intervention can infuse ethical and moral judgment into AI.

In support of this shift, an analysis by the Federal Reserve Bank of New York notes that Philosophy is gaining ground in employability, even surpassing some traditionally favoured disciplines. As highlighted in Sherin Shibu’s May 16, 2025, Entrepreneur.com article, Philosophy graduates in the United States currently face a lower unemployment rate (3.2%) than computer science graduates (6.1%).

In this context, ongoing global efforts to create safer AI systems hold promise for addressing the existential risks of AI exceeding human control, reaffirming the growing role of Philosophy in protecting the future of humanity.

(The author, a recipient of National Rajbhasha Gaurav and De Nobili awards, is a former DGP in Madhya Pradesh)
