
AI Governance Guidelines

Navigate AI regulations with Tsaaro's AI Governance Guidelines. Expert consulting on AI compliance, risk management, and ethical AI frameworks.


Presentation Transcript


  1. WHITEPAPER, JANUARY 2025. AI Governance Guidelines: Business-Focused Implementation Roadmap. All rights reserved by Tsaaro Consulting. www.tsaaro.com

  2. OVERVIEW
India is making significant strides in responsible AI governance with the release of the Report on AI Governance Guidelines Development by the Ministry of Electronics and Information Technology (MeitY). Published for public consultation, the report reflects India's commitment to creating a robust, inclusive, and adaptive governance framework that aligns with its aspirations for technological advancement. Stakeholders have until 27th January 2025 to provide feedback, ensuring the framework is both participatory and reflective of diverse perspectives.
This whitepaper builds on the report's recommendations, integrating global best practices and regulatory insights to provide businesses with a practical roadmap for responsible AI adoption. It emphasizes governance, risk assessment, and operationalization through structured processes, tools, and metrics, and it guides organizations in aligning with India's evolving AI regulatory landscape while fostering innovation and trust.

  3. FOREWORD
Akarsh Singh A. (CEO & Founder, Tsaaro Consulting)
The publication of the Report on AI Governance Guidelines Development signals a pivotal moment for AI governance in India, as the country looks to navigate the complexities of artificial intelligence while fostering innovation and ethical practices. With rapid advancements in AI technologies and increasing concerns about their societal impact, the guidelines aim to establish a framework that balances technological progress with accountability and responsibility. As AI systems become integral to business operations, responsible AI practices are crucial for organizations to mitigate risks and harness the full potential of these technologies.
Top 3 Key Insights:
- Developing clear governance structures is essential for accountability in AI systems.
- Effective risk management requires ongoing assessment and mitigation strategies throughout an AI system's lifecycle.
- A proactive approach to compliance allows organizations to seize global markets while maintaining ethical and legal standards.
By embracing these principles, businesses can position themselves as leaders in responsible AI, contributing to India's broader vision of an inclusive, secure, and transparent AI ecosystem.

  4. TABLE OF CONTENTS
- Introduction
- Understanding Responsible AI Governance
- Global Landscape: Regulations and Standards
- Current Legal Framework Governing AI Systems in India
- AI Governance Guidelines Development ("Report"): AI Governance Principles; Conditions for Operationalization; Gap Analysis; Recommendations
- The Business Case of Responsible AI
- Business-Focused Implementation Roadmap for Responsible AI Adoption: Key Steps (Governance, Risk Assessment, and Mitigation); Operationalizing the Framework (Processes, Tools, and Metrics)
- Conclusion

  5. INTRODUCTION
This whitepaper delves into the growing importance of responsible AI governance in India, especially in light of the Report on AI Governance Guidelines Development by the Ministry of Electronics and Information Technology (MeitY). As AI technologies continue to evolve and permeate various sectors, the need for a comprehensive and adaptive governance framework becomes increasingly critical. MeitY has published the report for public consultation to ensure that the governance mechanisms reflect India's aspirations for effective AI regulation. The consultation aims to create an inclusive and adaptive framework for AI advancements. Stakeholders are encouraged to submit their comments by January 27, 2025, making this an important opportunity for businesses and individuals to contribute to the future of AI governance in India.
This whitepaper examines the report, the key principles of responsible AI, and the steps necessary for businesses to operationalize them effectively. It outlines the ethical, legal, and business imperatives for AI compliance and offers practical recommendations for organizations to navigate the complexities of AI governance, mitigate risks, and enhance trust. By providing an implementation roadmap, the whitepaper aims to equip businesses with the tools and knowledge to adopt responsible AI practices, ensuring alignment with both national aspirations and global regulatory standards.

  6. UNDERSTANDING RESPONSIBLE AI GOVERNANCE
Meaning and Scope
Responsible AI governance refers to the frameworks, policies, and processes that guide the ethical and accountable development, deployment, and management of AI systems. It encompasses principles like fairness, transparency, accountability, and safety, ensuring that AI aligns with societal values, regulatory requirements, and organizational objectives. Effective governance addresses risks, promotes trust, and fosters sustainable AI innovation.
Why It Matters: Ethical, Legal, and Business Imperatives
1. Ethical Imperatives: Responsible AI governance ensures AI systems respect human rights, avoid discrimination, and promote inclusivity. Upholding ethical principles is vital for maintaining public trust in AI technologies.
2. Legal Imperatives: As AI regulations evolve globally, robust governance ensures compliance with legal requirements (e.g., the EU AI Act), minimizing risks of penalties, litigation, and enforcement actions.
3. Business Imperatives: Effective governance mitigates risks, enhances reputational value, and strengthens stakeholder confidence. It enables organizations to leverage AI responsibly for competitive advantage, fostering innovation and long-term success.

  7. GLOBAL LANDSCAPE: REGULATIONS AND STANDARDS
The global landscape for AI governance is shaped by evolving regulations, standards, and best practices aimed at ensuring responsible AI development and deployment. This section explores key initiatives and frameworks driving trustworthy AI across jurisdictions worldwide:
- OECD AI Principles: They promote human-centric AI through five core principles: inclusive growth, human-centered values, transparency, robustness, and accountability. Emphasising their complementary nature, the principles guide responsible stewardship of trustworthy AI. The framework also provides recommendations for nations, including fostering R&D, enabling policies, and enhancing international collaboration to implement these principles effectively.
- NIST AI Risk Management Framework, 2023: It provides practical guidance for organisations to identify and manage AI-related risks while promoting trustworthy AI development. It outlines characteristics of trustworthy AI systems, including validity, security, accountability, transparency, explainability, privacy enhancement, and fairness. The framework also introduces four key functions (Govern, Map, Measure, and Manage) to help organisations effectively address AI risks and operationalise these principles flexibly.
- European Union Artificial Intelligence Act, 2024: The EU AI Act, the world's first comprehensive regulatory framework for artificial intelligence, establishes harmonised rules to promote trustworthy AI in the EU. It outlines clear obligations for AI developers and deployers and categorizes AI systems into four risk levels (unacceptable, high, limited, and minimal), ensuring oversight proportionate to the risks associated with AI use. Non-compliance with the EU AI Act can attract a financial penalty of up to EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher (a worked example of this cap follows this list).
- United States Approach to AI Governance: The United States adopts a decentralised approach to AI regulation, blending federal and state initiatives. The CHIPS and Science Act of 2022 prioritises AI, while a White House Executive Order emphasises transparency and worker protection. Colorado's AI Act, 2024 is the first comprehensive state legislation, addressing algorithmic discrimination and regulating high-risk AI systems in critical sectors like healthcare and employment. These efforts reflect a balance between innovation and risk management.
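To make the EU AI Act penalty ceiling quoted above concrete, here is a minimal sketch using only the figures cited in this section (EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher). The function name and the example turnover figures are illustrative assumptions; actual fines depend on the infringement category and supervisory discretion.

```python
# Illustrative sketch of the EU AI Act's top penalty ceiling described above.
# max_eu_ai_act_penalty is a hypothetical helper, not an official calculator.

def max_eu_ai_act_penalty(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the top-tier fine: EUR 35M or 7% of turnover, whichever is higher."""
    fixed_cap_eur = 35_000_000                               # EUR 35 million
    turnover_cap_eur = 0.07 * worldwide_annual_turnover_eur  # 7% of worldwide annual turnover
    return max(fixed_cap_eur, turnover_cap_eur)

# Example: a firm with EUR 1 billion turnover faces a ceiling of EUR 70 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_eu_ai_act_penalty(1_000_000_000))  # 70000000.0
print(max_eu_ai_act_penalty(200_000_000))    # 35000000.0 (7% is only 14M, so the fixed cap applies)
```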

  8. CURRENT LEGAL FRAMEWORK GOVERNING AI SYSTEMS IN INDIA
India's existing legal framework, though not AI-specific, encompasses provisions that can be applied to regulate and address risks associated with AI systems:
- Information Technology Act, 2000 (IT Act): Addresses cybercrimes related to malicious synthetic media, such as cheating by personation (Section 66D), capturing and transmitting private images without consent (Section 66E), and publishing obscene material (Sections 67A and 67B). It also establishes CERT-In and NCIIPC to manage cybersecurity threats, mandates incident reporting, and enforces practices such as clock synchronization and maintaining security logs under the CERT-In Rules and the Cybersecurity Directions, 2021.
- Indian Penal Code, 1860 / Bharatiya Nyaya Sanhita, 2023: Covers offenses like identity theft, forgery, defamation, and circulation of obscene content, ensuring accountability for AI-related harms under criminal law.
- IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Mandate intermediaries to prevent and address harmful content, inform users about compliance, and promptly act on complaints, such as removing impersonation or morphed images within 24 hours.
- Digital Personal Data Protection Act, 2023: The Act applies to fully or partly automated processing of personal data, potentially covering AI-based personal data collection, disclosure, and other forms of processing. This would require data fiduciaries to comply with obligations such as implementing security practices, safeguarding data, and ensuring transparency.
- Sectoral Cybersecurity Guidelines: Various regulators have introduced sector-specific cybersecurity guidelines, including the RBI's Cyber Security Framework in Banks, 2016, which sets standards for banks, non-banking financial companies, and payment systems; SEBI's Cybersecurity and Cyber Resilience Framework, 2024; and the IRDAI's Information and Cyber Security Guidelines, 2023.
While India's existing legal framework offers substantial provisions to address various risks associated with AI systems, such as cybersecurity, data protection, and criminal accountability, the current laws are not AI-specific. Given the unique challenges and rapid advancements in AI technologies, a more targeted regulatory approach is necessary.

  9. AI GOVERNANCE GUIDELINES DEVELOPMENT ("REPORT")
In March 2024, the Government of India approved the IndiaAI Mission, aiming to build a comprehensive ecosystem that drives AI innovation through strategic programs and public-private collaborations. Recognizing the need for a tailored approach to AI governance, MeitY constituted a multi-stakeholder Advisory Group in November 2023 to develop an 'AI for India-Specific Regulatory Framework'. As part of this initiative, a Subcommittee on 'AI Governance and Guidelines Development' was formed to analyse critical gaps, examine key issues, and offer actionable recommendations for a comprehensive framework to ensure AI systems in India are trustworthy and accountable.
The Report broadly deals with the following:
- AI Governance Principles
- Operationalizing the Principles: a lifecycle approach, an ecosystem view, and leveraging technology
- Gap Analysis: compliance with and enforcement of existing laws, transparency and responsibility, and a whole-of-government approach
- Recommendations: specific actions and policy suggestions
AI Governance Principles
Several organizations across government, industry, and civil society have outlined principles for responsible and trustworthy AI (RTAI) to guide the development, deployment, and regulation of AI systems. In India, efforts by NITI Aayog and NASSCOM provide a strong foundation, while globally, the OECD AI Principles offer a framework for convergence. A proposed set of AI governance principles builds on these initiatives, aiming to align with both Indian and global standards while focusing on operationalizing them in the Indian context.
- Transparency: AI systems must provide meaningful information on their development, processes, and limitations, be interpretable, and inform users when they are interacting with AI.
- Accountability: Developers and deployers should ensure AI systems respect user rights, comply with laws, and implement mechanisms for clarifying accountability.

  10. KEY PRINCIPLES OF AI GOVERNANCE
(Figure: the Report's principles comprise Transparency; Accountability; Safety, Reliability & Robustness; Privacy & Security; Fairness & Non-Discrimination; Human-Centred Values; Inclusive & Sustainable Innovation; and Digital by Design Governance.)
- Safety, Reliability & Robustness: AI must be safe, reliable, and monitored to mitigate risks, misuse, and unintended outcomes. Regular checks should ensure systems perform as intended.
- Privacy & Security: AI must comply with data protection laws, respect privacy, ensure data quality and integrity, and adopt security-by-design measures.
- Fairness & Non-Discrimination: AI should be inclusive, fair, and free from biases or discrimination, promoting equal opportunities for all.
- Human-Centric Approach & 'Do No Harm': AI must include human oversight to address ethical dilemmas, prevent undue reliance, and mitigate societal harm.
- Inclusive & Sustainable Innovation: AI development should equitably distribute benefits and align with sustainable development goals.
- Digital by Design Governance: AI governance should utilize digital tools to enforce compliance, adopt techno-legal measures, and streamline regulatory processes.
Considerations to Operationalise AI Principles
The sub-committee highlights three key concepts for operationalising AI governance in India: a lifecycle approach to AI systems, an ecosystem-wide view of AI actors, and leveraging technology for effective governance.

  11.
1. Examining AI Systems Using a Lifecycle Approach
A lifecycle approach is key to effectively implementing AI governance principles, as the risks associated with AI systems vary at different stages. These stages include:
- Development: Examining the design, training, and testing of AI systems.
- Deployment: Assessing the implementation and operational use of AI systems.
- Diffusion: Considering the long-term impact of widespread AI adoption across sectors.
2. Taking an Ecosystem View of AI Actors
AI systems involve multiple actors across their lifecycle, such as data principals, providers, developers, deployers, and end-users. Focusing on individual actors limits governance, while an ecosystem approach ensures a holistic view, clarifying responsibilities and liabilities.
3. Leveraging Technology for Governance
Traditional governance may fall short given the growth of AI systems. A techno-legal approach, combining legal frameworks with governance technology, can enhance oversight. Tools like "consent artefacts" can assign unique identities to participants, enabling traceability and liability. Technology can assist in tracing unlawful activities, though such tools must be regularly reviewed for security, fairness, and impact on fundamental rights.
Gap Analysis
In conducting a gap analysis, the sub-committee emphasized that existing laws and regulations still apply to AI systems, with principles like safety, equality, non-discrimination, and privacy grounded in constitutional rights. A review of these laws' suitability in addressing AI-related risks will guide the strengthening of the governance framework. The analysis should focus on areas of existing and emerging concern, with a cohesive, whole-of-government approach necessary to address the rapidly evolving AI landscape. To govern effectively, regulators will need adequate information from two critical perspectives:

  12. An ecosystem view is particularly relevant to understand which AI systems are being developed and deployed in India, especially those with high capability, those likely to be widely deployed, or those used in sensitive use cases.
Recommendations
1. AI Coordination Committee: Form an Inter-Ministerial AI Coordination Committee to coordinate AI governance efforts across sectors, strengthen laws, harmonize initiatives, and promote responsible AI. The Committee will also focus on creating sector-specific datasets to assess fairness and bias in AI models.
2. Technical Secretariat: Establish a Technical Secretariat to bring together expertise, assess AI risks, and develop metrics and frameworks for responsible AI. It will also identify gaps in legislation and state capacity for emerging AI challenges.
3. AI Incident Database: The Technical Secretariat should establish an AI incident database, referencing the OECD AI Incidents Monitor, to track real-world AI issues and guide mitigation efforts. It should encourage voluntary reporting from private entities while ensuring confidentiality and focusing on harm mitigation, not fault finding.
4. Transparency Commitments: The Technical Secretariat should collaborate with industry to promote voluntary transparency commitments, focusing on AI systems' purposes and regular disclosures such as transparency reports and model cards (an illustrative model card sketch follows this list).
5. Technological Measures: The Technical Secretariat should assess the suitability of technological measures, like content provenance tracking and real-time negative outcome monitoring, to address AI risks. It should evaluate standards and mechanisms for improving content tracking across sectors.
6. Sub-Group: A sub-group should be formed to collaborate with MeitY in suggesting specific measures for legislation such as the proposed Digital India Act (DIA) to strengthen the legal, regulatory, and technical frameworks.
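Recommendation 4 above points to model cards as one form of voluntary disclosure. The snippet below is a minimal, hypothetical sketch of what such a disclosure could contain; the field names, the example system, and the values are assumptions for illustration, not a format prescribed by the Report.

```python
# Hypothetical model card for an illustrative AI system; every value below is
# invented for the example and would come from the deployer's own documentation.
model_card = {
    "model_name": "resume-screening-assistant",   # hypothetical system
    "version": "1.2.0",
    "developer": "Example Corp",
    "intended_use": "Shortlisting applications for human review",
    "out_of_scope_uses": ["automated rejection without human oversight"],
    "training_data_summary": "Anonymised historical applications, 2019-2023",
    "evaluation_metrics": {"accuracy": "reported per release", "fairness_gap": "reported per release"},
    "known_limitations": ["lower confidence for thin-file applicants"],
    "human_oversight": "All recommendations reviewed by a recruiter",
    "contact": "ai-governance@example.com",
}

# A transparency report could simply publish such cards on a regular cadence.
for field, value in model_card.items():
    print(f"{field}: {value}")
```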

  13. THE BUSINESS CASE OF RESPONSIBLE AI
Risks of Non-Compliance: Legal, Reputational, and Financial
- Legal Risks: Non-compliance may lead to fines, operational restrictions, and litigation tied to privacy, bias, or safety violations.
- Reputational Risks: AI-related mishaps can erode trust, damage brand value, and spark public backlash, especially in cases of bias or data misuse.
- Financial Risks: Non-compliance can disrupt business continuity, cause market share losses, and limit access to regions with stringent AI standards.
(Figure: risks of AI non-compliance by impact. High impact: hefty fines and sanctions (legal); decline in sales due to non-compliance (financial). Low impact: minor legal complications (legal); increased compliance costs (financial).)
Benefits of Proactive Compliance: Trust and Market Competitiveness
- Enhances Trust and Credibility: Builds confidence among customers, regulators, and investors through ethical AI practices.
- Drives Innovation and Market Access: Encourages responsible AI applications while ensuring compliance with global standards for broader market reach.
- Strengthens Competitive Advantage: Attracts talent, investment, and partnerships, positioning businesses as leaders in the AI-driven economy.
(Figure: an ethical framework feeds trust, innovation, customer loyalty, and differentiated offerings, which together drive market competitiveness.)

  14. BUSINESS-FOCUSED IMPLEMENTATION ROADMAP FOR RESPONSIBLE AI ADOPTION
Key Steps: Governance, Risk Assessment, and Mitigation
Implementing responsible AI starts with strong governance, thorough risk assessments, and effective mitigation to align AI systems with ethics, laws, and organizational goals.
(Figure: a responsible AI governance roadmap spanning three pillars: establish governance structures (form an AI committee, define accountability, develop an AI policy), conduct comprehensive risk assessments (map AI systems, use a risk taxonomy, assess compliance), and mitigate risks proactively (implement technical safeguards, develop incident response plans, train employees).)
- Establish Governance Structures: Form a cross-functional responsible AI committee involving stakeholders from legal, compliance, IT, HR, and business units. Define clear accountability for AI development, deployment, and monitoring, and develop an AI policy aligned with global regulations.
- Conduct Comprehensive Risk Assessments: Map AI systems to identify use cases, stakeholders, and associated risks (e.g., bias, privacy, or safety concerns). Use a risk taxonomy to categorize risks by impact and likelihood, focusing on high-risk applications (a scoring sketch follows at the end of this section).
- Mitigate Risks Proactively: Implement technical safeguards like fairness testing, bias detection, and explainability tools. Develop incident response plans for AI-related issues to mitigate harm and ensure rapid recovery.
Operationalizing the Framework: Processes, Tools, and Metrics
Turning responsible AI principles into action requires clear processes, effective tools, and measurable metrics to ensure consistent implementation and accountability across the organization.
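Before turning to processes, tools, and metrics, the sketch below illustrates the risk-taxonomy step referenced above: scoring each identified AI risk by impact and likelihood so that high-risk applications are prioritised. The three-level scales, the multiplication rule, and the example risks are assumptions for illustration, not Tsaaro's or MeitY's methodology.

```python
# Minimal risk-taxonomy sketch: categorize AI risks by impact and likelihood,
# then rank them so high-risk applications are addressed first.
from dataclasses import dataclass

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AIRisk:
    system: str        # AI system or use case the risk belongs to
    description: str   # e.g. bias, privacy, or safety concern
    impact: str        # "low" | "medium" | "high"
    likelihood: str    # "low" | "medium" | "high"

    @property
    def score(self) -> int:
        # Simple impact-times-likelihood score; real taxonomies may weight differently.
        return LEVELS[self.impact] * LEVELS[self.likelihood]

# Hypothetical risk register entries for illustration only.
register = [
    AIRisk("resume-screening", "gender bias in ranking", "high", "medium"),
    AIRisk("chat-assistant", "disclosure of personal data", "high", "low"),
    AIRisk("demand-forecasting", "stale training data", "medium", "medium"),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.score, risk.system, ":", risk.description)
```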

  15.
(Figure: implementation rests on processes (AI lifecycle management), tools (fairness evaluation platforms, privacy-enhancing technologies), and metrics (KPIs for fairness).)
Processes:
- AI Lifecycle Management: Incorporate responsible AI principles at every stage, from ideation to decommissioning.
- Audit and Review Mechanisms: Establish periodic reviews of AI systems for compliance and performance.
- Vendor and Partner Oversight: Extend responsible AI practices to third-party collaborations.
Tools:
- Adopt tools like fairness evaluation platforms, explainability frameworks (e.g., SHAP, LIME), and privacy-enhancing technologies.
- Use risk management frameworks like NIST's AI Risk Management Framework to guide implementation.
Metrics:
- Define key performance indicators (KPIs) for fairness, transparency, and accountability (e.g., demographic parity for fairness or explanation accuracy for transparency); a worked demographic parity example follows the conclusion below.
- Track regulatory compliance, risk mitigation outcomes, and stakeholder satisfaction.
CONCLUSION
As India continues to refine its AI governance framework, businesses must stay proactive in adapting to emerging regulations. The responsible implementation of AI, backed by robust governance, risk assessment, and compliance measures, will be critical for organizations not only to meet legal requirements but also to foster trust and competitiveness in the market. By embracing these principles, companies can ensure long-term success, mitigate potential risks, and lead the way in shaping the future of AI in India. With the implementation roadmap provided in this whitepaper, businesses can take actionable steps to ensure responsible AI adoption, building a solid foundation for sustainable and ethical AI practices.
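As a closing illustration of the fairness KPI named in the Metrics section above, the snippet below computes a demographic parity difference, i.e. the gap in positive-outcome rates between groups. The group labels, the toy predictions, and the choice of an acceptable tolerance are assumptions for illustration; a real assessment would use the organization's own data and thresholds.

```python
# Demographic parity difference: the gap in positive prediction rates across groups.
# 0 means parity; larger values indicate disparate outcomes worth investigating.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: 1 = favourable outcome (e.g. application shortlisted).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5: group A at 75%, group B at 25%
```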

  16. Key Contributors:
- Akarsh Singh A. (CEO & Founder, Tsaaro Consulting), Contact: +91 7543898066, akarsh@tsaaro.com
- Krishna Srivastava (Co-Founder & Director, Tsaaro Consulting), Contact: +91 7760923421, krishna@tsaaro.com
- Bhaskara Nand Shukla (Director, Tsaaro Consulting), Contact: +91 9119999054, bhaskara@tsaaro.com
- Arohi Pathak (Senior Data Protection Consultant, Tsaaro Consulting)
- Mahima Sharma (Senior Data Protection Consultant, Tsaaro Consulting)
- Zoya Shabbir (Data Protection Consultant, Tsaaro Consulting)
References:
1. The Ministry of Electronics and Information Technology (MeitY), Report on AI Governance Guidelines Development (January 2025).
2. The Organisation for Economic Co-operation and Development (OECD), AI Principles: Recommendation of the Council on Artificial Intelligence, 2019.
3. The National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework, 2023.
4. The European Union Artificial Intelligence Act, 2024.
5. The Digital Personal Data Protection Act, 2023.
6. Information Technology Act, 2000 (IT Act).
7. Bharatiya Nyaya Sanhita (BNS), 2023.
8. IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

  17. OUR SERVICES
DATA PRIVACY
- Regulatory Gap Assessment (GDPR, DPDPA, CCPA/CPRA and 50 others)
- Privacy Program Implementation
- Privacy by Design Assessment
- Privacy Automation Platform Implementation
- Industry Standards Implementation (ISO 27701:2019, NIST PMF, AICPA PMM, SOC 2 Type 2 Privacy)
- DPO as a Service
- EU-Rep as a Service
- Consent Management
- Privacy Risk Assessment and Remediation
GRC
- Industry Standards Implementation (ISO 27001:2022, NIST CSF, NIST SP 800-53, SOC 2 Type 2, HIPAA)
- GRC Platform Implementation
- Cyber Risk Quantification (FAIR Assessment)
- Cyber Maturity Assessment
- Cloud Security Assessment (ISO 27017/18)
- Data Governance Automation
- Third-Party Risk Management
SYSTEM INTEGRATORS
We are the resellers of the tools listed below and help with efficient implementation of the same via expert staff augmentation: Securiti.ai, GoTrust, OneTrust, Skyflow, BigID, Privado, Exterro, Scrut, Secuvy.ai, Vanta.
AI COMPLIANCE
- EU AI Act Compliance
- Ethical Impact Assessment
TECHNICAL SECURITY
- Vulnerability Assessment & Penetration Testing
- Red/Purple/Blue Teaming
- Threat Intelligence
- MDR/XDR/MSSP services
Website: www.tsaaro.com | Email: info@tsaaro.com
OFFICE ADDRESSES
AMSTERDAM: Regus Schiphol Rijk, Beech Avenue 54-62, Het Poortgebouw, 1119 PW, Amsterdam, Netherlands. Phone: +31-686053719
DELHI NCR: ATS Bouquet, Tower C, Office No. 302, Sector 132, Noida, Uttar Pradesh 201304, India. Phone: +91 9557722103
BENGALURU: Manyata Embassy Business Park, Ground Floor, E1 Block, Beech Building, Outer Ring Road, Bengaluru 560045, India. Phone: +91 9557722103
MUMBAI: Supreme Business Park, Unit No. B-501, 5th Floor, Wing 'B', Powai, Mumbai, Maharashtra, 400076, India. Phone: +91 9557722103
PUNE: Tech Centre, 5th Floor, Rajiv Gandhi Infotech Park, MIDC, Hinjewadi, Pune, Maharashtra, 411057, India. Phone: +91 9557722103
All rights reserved by Tsaaro Consulting
