Systems like ChatGPT have no inherent goals or motivations; their outputs depend entirely on how humans prompt them
By Avinash Malladhi
As artificial intelligence (AI) rapidly transforms societies and economies, India stands at a crossroads. With its immense talent pool, India is poised to lead breakthroughs in AI across diverse domains, bringing efficiencies and capabilities far beyond human limitations.
However, this technological revolution also brings risks and responsibilities. How we steer AI will shape its impact on jobs, inequality, governance and more. Core to this is prompt engineering — the careful design of prompts to align AI systems with human values. As India charges ahead in AI innovation, we must prioritise principled prompt engineering and collaborative governance to ethically co-create an AI-enabled future.
Tip of the Iceberg
Recent systems like ChatGPT demonstrate the staggering progress in language AI. Behind the scenes, advances in model architecture, data, and computing propel performance. However, what users see is merely the tip of the iceberg. These systems have no inherent goals or motivations; their outputs depend entirely on how humans prompt them. A dangerous genie lies in wait inside the machine, ready to serve any master. Will we wield this power responsibly?
Prompt engineering offers a key channel for control. Carefully designed prompts can encourage prosocial behaviour and reduce harm from AI systems. With the right prompts, we can offset biases in training data and steer away from toxic responses. Skilled prompt engineering will become even more crucial as AI capabilities grow more advanced and opaque. It gives human values and ethics a mechanism to shape AI behaviours, so that systems reflect those values rather than blindly amplify dataset biases.
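The control channel described above can be made concrete with a toy sketch. Everything here is illustrative assumption, not any vendor’s real API: the preamble wording, function names and blocklist are invented for the example.

```python
# Illustrative sketch of prompt-level control: a fixed, human-written
# preamble steers the model, and a simple output check catches failures.
# The preamble wording and the blocklist are hypothetical examples.
SAFETY_PREAMBLE = (
    "You are a helpful assistant. Answer inclusively and accurately, "
    "refuse requests for harmful content, and say 'I am not sure' "
    "rather than guess."
)

def build_prompt(user_query: str) -> str:
    """Prepend the value-laden preamble to every user query."""
    return f"{SAFETY_PREAMBLE}\n\nUser: {user_query}\nAssistant:"

def passes_output_check(reply: str, blocklist=("blockedterm",)) -> bool:
    """A second line of defence: reject replies containing blocked terms.
    Real systems use trained classifiers, not keyword lists."""
    lowered = reply.lower()
    return not any(term in lowered for term in blocklist)
```

Such wrappers are crude, but they show where human judgment enters: in the preamble’s wording and in what the output check screens for.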
Upside and Downside
India must contribute its voice and talent to advancing principled prompt engineering globally. We must recognise both the upside and downside risks of language AI. On the one hand, AI promises improved access to education, healthcare, finance, and more. Well-engineered prompts can steer these technologies toward inclusivity, accuracy and transparency. But poorly designed prompts have dire potential: misinformation that divides societies further, authoritarianism enabled by mass surveillance and runaway self-improvement in AI. Prompt engineering techniques are also easily dual-use — they can further geopolitical agendas and control citizens.
For India to responsibly harness AI’s benefits, prompt engineering standards and best practices should be developed through inclusive public-private partnerships. Domain experts in social sciences, ethics and law are needed alongside AI researchers and developers. Civil society participation is crucial to surface concerns around data privacy, autonomy and consent. Vulnerable communities in India must help define what AI safety and alignment mean in our diverse socio-economic context. Co-creation with citizens can root AI in shared values of justice, dignity, and non-maleficence.
Public Trust
Transparency is also key. Norms around documentation, auditing and labelling of prompts will build public trust. We must understand how prompts connect training data to model outputs, particularly for high-stakes decisions in banking, justice and the like. Explainable-AI techniques such as LIME (Local Interpretable Model-agnostic Explanations) should be integrated to interpret model behaviours. Practices common in software engineering, such as data tagging, documentation and version control, can extend to prompt engineering.
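The idea behind LIME — perturb an input and see which parts drive an opaque model’s output — can be sketched in miniature. The “model” below is a made-up stand-in for a lending classifier, and the attribution uses a simplified mean-difference estimate rather than LIME’s weighted linear surrogate:

```python
import random

# Made-up stand-in for an opaque classifier: scores a loan note as risky.
def risk_model(words):
    score = 0.5
    if "defaulted" in words:
        score += 0.4
    if "salaried" in words:
        score -= 0.3
    return score

def explain(words, model, n_samples=400, seed=0):
    """LIME-flavoured attribution: randomly drop words, then estimate each
    word's influence as mean(model with word) - mean(model without word).
    Real LIME fits a weighted linear surrogate; this is a simplification."""
    rng = random.Random(seed)
    present = {w: [] for w in words}
    absent = {w: [] for w in words}
    for _ in range(n_samples):
        keep = [w for w in words if rng.random() < 0.5]
        y = model(keep)
        for w in set(words):
            (present if w in keep else absent)[w].append(y)
    return {
        w: sum(present[w]) / max(len(present[w]), 1)
           - sum(absent[w]) / max(len(absent[w]), 1)
        for w in set(words)
    }

note = "applicant defaulted earlier but is now salaried".split()
contributions = explain(note, risk_model)
# "defaulted" pushes the risk score up, "salaried" pulls it down,
# and neutral words sit near zero.
```

Audits of this kind let a regulator or citizen ask which words in an application actually moved a decision, which is exactly the interpretability norm the article calls for in banking and justice.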
Regulating prompt engineering poses challenges. Prompts are simply words — easy to reproduce and difficult to censor at scale. Technical solutions like differential privacy would help, but norms and incentives matter more. Companies and researchers should recognise that it is in their self-interest to self-regulate around ethics. Policy should encourage transparency and accountability while allowing room for innovation.
India and AI
India also needs sectoral policies and incentives tailored to local contexts. For example, AI could boost financial inclusion using voice interfaces in multiple Indian languages. But prompts must safeguard against the exclusion or exploitation of vulnerable groups. In healthcare, prompt design should ensure patient privacy while optimising diagnosis for Indian genetic diversity. Our startups should be encouraged to build AI for social impact with prompts engineered for sustainability.
A critical application is AI for digitisation, from optical character recognition (OCR) to translation. Here prompt design must balance accuracy, speed, language diversity and context. Multilingual OCR has huge potential to unlock written knowledge and accelerate e-governance. But it requires extensive training data across Indian languages with prompts carefully tuned by linguists. Translating regional language books and manuscripts could make knowledge more accessible, but prompts must minimise the loss of cultural context. AI’s carbon footprint should be considered, given India’s climate vulnerability.
Through alliances grounded in ethics and thoughtful governance, India can responsibly harness AI’s transformational potential. The onus falls on corporates, researchers and governments to prioritise prompt engineering that aligns with constitutional values of justice, liberty, equality and fraternity. With principled AI development, we can co-create a future where technology empowers us rather than subordinates us.
The genie waits in anticipation, ready to serve for good.