
Ε˅ÂĿNLP

Streamline and objectively evaluate NLP models with a 360-degree assessment of robustness, bias, and interpretability. Compare models, establish operational boundaries, and gain actionable insights for optimal model selection.


Presentation Transcript


  1. Ε˅ÂĿNLP: An Evaluation Platform for Natural Language Processing

  2. Founder Profiles
  • Sandya Mannarswamy (Sandy): Ph.D. from IISc (CS); strong patent and publication profile; 20 years of enterprise software R&D experience with Hewlett Packard, Microsoft and Xerox Research. sandyasm@gmail.com
  • Saravanan Chidambaram (Saro): M.Tech., IIT Kharagpur (CE); Head of Advanced Development (HPE); 20 years of R&D and technology management with Hewlett Packard Enterprise, Microsoft and Oracle. sarochida@gmail.com
  A balanced mix of research and product development experience.

  3. Pain Point
  • Subjective Evaluation: model selection for deployment is subjective, leading to costly errors, failures and project delays.
  • Inability to Compare and Contrast Models: customers lack a streamlined mechanism to evaluate different offerings against effective criteria.
  • Lack of Quantitative Operational Boundaries: models often perform poorly in the real world, tending to be fragile and not very robust.
  NLP/ML models are the crux of AI-driven solutions such as chatbots, sentiment analysis and question answering, and are being deployed in customer support, HR, health care, social sciences and e-commerce.

  4. Our Value Proposition
  What a “Master Health Check-up” is to a human, Ε˅ÂĿNLP is to NLP models. We assess your models and assist you in answering these questions:
  • Is it robust?
  • Is it unbiased?
  • Is it interpretable?

  5. Our Solution
  Data Augmentation | On-Premise and On-Cloud | Actionable Insights
  • Objective Evaluation over a Diverse Set of Criteria: a 360-degree objective assessment of the model, with just a few clicks, in terms of robustness, bias and interpretability.
  • Compare Offerings for Price to Effectiveness: provision to compare models, apply the criteria you care about, and select the best-performing model for your operating scenario.
  • Quantify and Establish Operational Boundaries: quantify operational performance characteristics on a wide range of domain-, task- and scenario-specific data generated by us.

  6. Our Solution (continued)
  • An NLP model evaluation tool that can objectively evaluate any NLP model with just a few clicks.
  • Provides a 360-degree objective assessment of the model in terms of robustness, bias, interpretability and more.
  • Quantifies operational performance characteristics on a wide range of domain-, task- and scenario-specific data that we generate for you.
  • Needs only a black-box interface to your NLP model and a small fraction of your test data.
  • Available in both cloud-based and on-premise variants.
  • We perform a holistic wellness check-up of your model and return actionable reports on the different evaluation criteria.
  • You can compare models, decide which criteria you care about, and select the best-performing model for your operating scenario.
  • We can also provide additional data (generated by us) to improve your model.
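  To make the black-box evaluation idea concrete, here is a minimal sketch of one such check, a typo-robustness score, written against a hypothetical `predict(text) -> label` interface. The platform's actual implementation and API are not described in this deck; the function and parameter names below are illustrative assumptions only.

```python
# Minimal sketch of a black-box robustness check (illustrative only).
# Assumes the model under test is exposed as predict(text) -> label,
# a hypothetical interface not specified in the deck.
import random

def perturb(text, rng):
    """Introduce a small typo by swapping two adjacent characters."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def robustness_score(predict, test_texts, n_perturbations=5, seed=0):
    """Fraction of perturbed inputs whose prediction matches the original."""
    rng = random.Random(seed)
    consistent = total = 0
    for text in test_texts:
        original = predict(text)
        for _ in range(n_perturbations):
            total += 1
            if predict(perturb(text, rng)) == original:
                consistent += 1
    return consistent / total if total else 1.0

# Toy stand-in model: classifies by presence of the word "good".
toy_predict = lambda t: "pos" if "good" in t.lower() else "neg"
score = robustness_score(toy_predict, ["this is good", "terrible film"])
```

  A real evaluation suite would apply many perturbation families (typos, synonym swaps, demographic name substitutions for bias checks), but this is the essence of why only a black-box interface and a small slice of test data are needed.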

  7. Our Ability to Build This Venture
  • We have built mission-critical, complex enterprise software for two decades (production compilers, platforms and operating systems).
  • We understand how important software quality is, and how difficult it is to quantify.
  • We have a balanced mix of NLP research and production software development experience.
  • We have built enterprise software quality evaluation and performance analysis tools.

  8. Help We Are Seeking from NSRCEL
  • Validate our product-market fit.
  • Pivot and refine our value proposition.
  • Identify potential customers through NSRCEL's networks.
  • Connect us with corporates and start-ups outside the NLP/AI space who are building or buying NLP models without any objective evaluation.
  • Connect us with VCs who can fund us.

  9. Our Progress So Far
  • We are currently building a prototype; the first version is expected to be complete by 25th Sep.
  • We have had informal discussions with NLP and data scientists in our network about the utility of our product.
