Responsible AI

Vendor Name Yottasecure, Inc.
System Name Yottasecure Real-time AI Vulnerability Intelligence Platform
Overview Yottasecure’s AI-powered platform helps organizations identify, prioritize, and remediate software vulnerabilities. It combines autonomous scanning, ML-based exploit likelihood scoring, and an LLM-powered co-pilot to identify vulnerabilities and accelerate patching and compliance for enterprises in critical sectors.
Purpose The platform performs:

  1. Vulnerability scanning and analysis
  2. Exploit likelihood scoring using machine learning
  3. Conversational guidance for remediation via an AI co-pilot
  4. Compliance tracking for SOC 2 and HIPAA
  5. Penetration testing and audit support
All features are configurable: users can enable or disable the co-pilot, adjust scan depth, set compliance frameworks, and integrate with CI/CD pipelines or ticketing systems (see the configuration sketch below).
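A minimal sketch of what such a configuration could look like from a Python client; the field names and values below are illustrative assumptions, not Yottasecure's actual settings schema.

# Hypothetical configuration sketch; field names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlatformConfig:
    copilot_enabled: bool = True                    # enable/disable the LLM co-pilot
    scan_depth: str = "standard"                    # e.g. "quick", "standard", "deep"
    compliance_frameworks: list[str] = field(
        default_factory=lambda: ["SOC 2", "HIPAA"]  # frameworks to track
    )
    ticketing_integration: Optional[str] = None     # e.g. "jira", "servicenow"

# example: deep scans with Jira ticket creation and the co-pilot disabled
config = PlatformConfig(copilot_enabled=False, scan_depth="deep",
                        ticketing_integration="jira")
print(config)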
Intended Domain Cybersecurity for legal, healthcare, infrastructure, and public sector organizations.
Training Data ML models are trained on public CVE/KEV datasets, exploit databases, vulnerability reports, and real-world incident data. All data sources are publicly available or licensed. LLM components utilize OpenAI APIs and are fine-tuned via supervised learning. Data updates occur monthly.
Test Data System performance is tested on historical vulnerability and exploit datasets and anonymized customer scans under varying conditions. Benchmarking includes false positive rates, prioritization accuracy, and remediation times.
Model Information The platform uses:

  • Transformer-based LLMs (via OpenAI API)
  • Supervised ML for exploit prediction (see the sketch after this list)
  • Rule-based logic for compliance modules
  • Rasa-based NLP pipeline for chatbot interface
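For illustration, a supervised exploit-likelihood model could look roughly like the sketch below; the features (CVSS score, vulnerability age, KEV listing), toy data, and model choice are assumptions for this example, not Yottasecure's actual feature set or pipeline.

# Minimal sketch of supervised exploit-likelihood scoring (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# toy training data: [cvss_score, days_since_disclosure, listed_in_kev]
X = np.array([
    [9.8,  10, 1],
    [7.5, 400, 0],
    [5.3, 900, 0],
    [8.1,  30, 1],
])
y = np.array([1, 0, 0, 1])  # 1 = observed exploitation in the wild

model = GradientBoostingClassifier().fit(X, y)
new_vuln = np.array([[9.1, 5, 1]])
print("exploit likelihood:", model.predict_proba(new_vuln)[0, 1])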
Update Procedure Models are updated quarterly or upon significant threat intelligence developments. Users are notified and have access to changelogs. Critical updates are automatically applied; others can be toggled in admin settings.
Inputs and Outputs
  • Inputs: Docker images, SBOMs, scan results, package metadata, user queries
  • Outputs: Prioritized vulnerability list, natural-language remediation guidance, compliance dashboards, AI chat responses
  • Integrations: Trivy, Qualys, Nessus, Jira, GitHub, Slack, ServiceNow
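As an illustration of the output side, a prioritized finding might be shaped roughly as below; the field names are hypothetical, not the platform's documented output schema.

# Illustrative shape of a prioritized finding; field names are hypothetical.
from typing import TypedDict

class Finding(TypedDict):
    cve_id: str
    package: str
    exploit_likelihood: float   # ML score in [0, 1]
    priority: int               # 1 = patch first
    remediation: str            # natural-language guidance from the co-pilot

finding: Finding = {
    "cve_id": "CVE-2024-0001",          # placeholder identifier
    "package": "example-lib 1.2.3",
    "exploit_likelihood": 0.92,
    "priority": 1,
    "remediation": "Upgrade example-lib to 1.2.4 or later.",
}
print(finding["cve_id"], finding["priority"])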
Performance Metrics Metrics include precision/recall of exploitability scoring, mean time to patch (MTTP), co-pilot usage rate, and customer-reported resolution confidence. Dashboards monitor these in real time.
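A short sketch of how metrics of this kind are computed; the numbers are made-up examples, not measured platform results.

# Precision/recall of exploitability scoring and mean time to patch (MTTP),
# computed over toy data for illustration only.
from datetime import date

predicted = [1, 1, 0, 1, 0]   # model: 1 = predicted exploitable
actual    = [1, 0, 0, 1, 1]   # ground truth: 1 = exploited

tp = sum(p and a for p, a in zip(predicted, actual))
fp = sum(p and not a for p, a in zip(predicted, actual))
fn = sum(not p and a for p, a in zip(predicted, actual))
precision = tp / (tp + fp)
recall = tp / (tp + fn)

# MTTP in days, per remediated finding
detected = [date(2024, 1, 1), date(2024, 1, 10)]
patched  = [date(2024, 1, 5), date(2024, 1, 20)]
mttp = sum((p - d).days for d, p in zip(detected, patched)) / len(detected)

print(f"precision={precision:.2f} recall={recall:.2f} MTTP={mttp:.1f} days")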
Bias Bias is managed through continuous audits of training data sources, model fairness tests, and user feedback loops. The co-pilot avoids automated enforcement and keeps a human in the loop for decisions.
Robustness The system flags low-confidence outputs and handles outliers by deferring to human review. Overwritten or corrected decisions are used to fine-tune future model updates.
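A minimal sketch of the low-confidence deferral pattern described above; the threshold value and function name are assumptions for illustration.

# Route a finding either to automatic prioritization or to analyst review.
CONFIDENCE_THRESHOLD = 0.6   # assumed cutoff, not a documented platform setting

def route_finding(finding_id: str, confidence: float) -> str:
    """Return 'auto' to apply the model's output, or 'human' to defer."""
    if confidence < CONFIDENCE_THRESHOLD:
        # flagged for analyst review; the analyst's decision is logged and can
        # feed into later model updates
        return "human"
    return "auto"

print(route_finding("finding-001", confidence=0.42))  # -> human
print(route_finding("finding-002", confidence=0.91))  # -> auto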
Optimal Conditions Performs best with well-structured scan data, clear SBOMs, and known frameworks (e.g., SOC 2, HIPAA). Works well with environments of 100+ assets.
Poor Conditions Less effective with noisy or sparse data or missing metadata. The co-pilot may hallucinate under ambiguous input.
Explanation The system provides natural language explanations for prioritization, decision reasoning in dashboards, and remediation logic in the co-pilot. Outputs are reviewable and auditable.
Jurisdiction Considerations The system aligns with U.S. cybersecurity frameworks (NIST, CISA KEV) and supports compliance for SOC 2, HIPAA, and state-level privacy laws.
Data Protection Compliant with NIST 800-53 and aligned with the NIST AI RMF; SOC 2 Type I certification is in progress. Supports HIPAA-compliant handling of PHI and anonymized processing of scan data.


Impact Assessment Questionnaire

How is the AI tool monitored to identify any problems in usage? Can outputs (recommendations, predictions, etc.) be overwritten by a human, and do overwritten outputs help calibrate the system in the future? Outputs are monitored via dashboards, and users can overwrite AI recommendations. Overwrites are logged and used for model tuning.
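An illustrative shape for a logged overwrite that could later be exported as a labeled example for model tuning; the schema below is an assumption, not the platform's actual audit-log format.

# Hypothetical overwrite record; field names are illustrative only.
import json
from datetime import datetime, timezone

override = {
    "finding_id": "finding-0042",
    "model_priority": 2,
    "analyst_priority": 1,          # human overwrite of the model's ranking
    "reason": "asset is internet-facing",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
# logged overwrites can later be reviewed and used to calibrate future models
print(json.dumps(override, indent=2))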


How is bias managed effectively? Bias is managed through curated training data, fairness checks, and continuous validation. Users can flag issues, and audit trails are maintained.
Has the vendor or an independent party conducted a study on the bias, accuracy, or disparate impact of the system? If yes, can the agency review the study, including its methodology and results? Initial internal audits have been conducted; a third-party bias/impact study is planned. Methodologies include stratified testing across industries and exploit datasets.


How can the agency and its partners flag issues related to bias, discrimination, or poor performance of the AI system? Users can flag issues through the UI, email, or API. All flagged items are reviewed and optionally added to model refinement sets.
How has the Human-Computer Interaction aspect of the AI tool been made accessible, such as to people with disabilities? The co-pilot interface supports keyboard navigation, screen readers, and WCAG-compliant layouts. Usability testing with security analysts and adherence to accessibility standards are part of QA.
Please share any relevant information, links, or resources regarding your organization’s responsible AI strategy. Yottasecure follows a Responsible AI framework aligned with NIST AI RMF and includes governance over training data, transparency in outputs, and oversight of LLM behavior.