Nishpaksh

Overview

Nishpaksh is an end-to-end framework designed for evaluating, comparing, and reporting the fairness characteristics of machine learning models. The framework computes comprehensive fairness metrics across protected demographic groups and generates structured, audit-ready summaries that align with governance and regulatory compliance requirements in AI systems.
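To make the idea of group-wise fairness metrics concrete, here is a minimal sketch (illustrative only, not Nishpaksh's actual API) of two common metrics computed over a protected attribute: demographic parity difference and disparate impact ratio. The function names and data are hypothetical.

```python
# Illustrative sketch (not Nishpaksh's API): two common group-fairness
# metrics computed from binary predictions and a protected attribute.
from collections import defaultdict

def selection_rates(y_pred, groups):
    """Fraction of positive predictions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, g in zip(y_pred, groups):
        totals[g] += 1
        positives[g] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest group selection rates (0 is ideal)."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(y_pred, groups):
    """Ratio of the lowest to the highest group selection rate (1 is ideal)."""
    rates = selection_rates(y_pred, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: group A is selected at 0.75, group B at 0.25.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
print(disparate_impact_ratio(y_pred, groups))         # 0.333...
```

Audit-ready summaries of the kind Nishpaksh produces aggregate metrics like these across all protected attributes and report where gaps exceed a chosen threshold.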

Built to comply with the TEC (Telecommunication Engineering Centre) Standard for Fairness Assessment and Rating of Artificial Intelligence Systems, Nishpaksh currently supports supervised learning models on tabular data. The framework enables systematic documentation of model behavior, identifies potential limitations, and provides a clear fairness rationale to support transparent, accountable decision-making in regulated AI deployments, helping organizations meet their ethical AI obligations without sacrificing technical rigor.

📗 TEC Standard Reference: Standard for Fairness Assessment and Rating of Artificial Intelligence Systems (TEC)

Future versions of Nishpaksh aim to add bias mitigation guidance, support data modalities beyond tabular formats, and extend compatibility to a wider range of machine learning model architectures, enabling fairness assessments across more diverse AI applications and use cases.

Research Paper

The theoretical foundation, design principles, and technical methodology of Nishpaksh are documented in the following research paper, which details the framework's architecture, its fairness metric implementation, and its mechanisms for compliance with the TEC standard:

Nishpaksh: TEC Standard-Compliant Framework for Fairness Auditing and Certification of AI Models

Preprint available at: https://doi.org/10.48550/arXiv.2601.16926

Tools & Technologies

Nishpaksh is built on modern Python libraries chosen for reliability and ease of integration. The core computational engine uses scikit-learn for machine learning operations and metric calculations, providing a stable, well-tested foundation for fairness assessments.
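As a hedged sketch of the kind of scikit-learn-based workflow this implies (not Nishpaksh's actual implementation), the snippet below trains a classifier on synthetic tabular data and compares accuracy across a hypothetical protected attribute:

```python
# Hedged sketch of a scikit-learn fairness-assessment workflow: train a
# model on synthetic tabular data, then compute a metric per protected
# group. The data and feature layout here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)        # synthetic protected attribute
X = rng.normal(size=(n, 3))               # synthetic tabular features
# Label depends on a feature plus the group, creating a group-level skew.
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

features = np.column_stack([X, group])
clf = LogisticRegression().fit(features, y)
y_pred = clf.predict(features)

# Per-group accuracy: a gap here would flag a potential fairness issue.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: accuracy = {accuracy_score(y[mask], y_pred[mask]):.3f}")
```

In practice the same pattern extends to any group-conditional metric (true-positive rate, selection rate, calibration), which is what makes scikit-learn's metric functions a convenient computational base.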

The interactive analytics dashboard is developed using Streamlit, offering an intuitive interface for exploring fairness metrics, visualizing model comparisons, and generating reports. This web-based interface makes the framework accessible to both technical and non-technical stakeholders, facilitating collaboration across teams.

Team
Dr. Ranjitha Prasad - Principal Investigator, IIIT Delhi
Avinash Agarwal - Technical Advisor, Department of Telecommunications
Shashank - Research Associate, IIIT Delhi