| Vendor Name |
Yottasecure, Inc. |
| System Name |
Yottasecure Real-time AI Vulnerability Intelligence Platform |
| Overview |
Yottasecure’s AI-powered platform helps organizations identify, prioritize, and remediate software vulnerabilities. It combines autonomous scanning, ML-based exploit likelihood scoring, and an LLM-powered co-pilot to identify vulnerabilities and accelerate patching and compliance for enterprises in critical sectors. |
| Purpose |
The platform performs:
- Vulnerability scanning and analysis
- Exploit likelihood scoring using machine learning
- Conversational guidance for remediation via an AI co-pilot
- Compliance tracking for SOC 2 and HIPAA
- Penetration testing and audit support

All features are configurable: users can enable or disable the co-pilot, adjust scan depth, select compliance frameworks, and integrate with CI/CD pipelines or ticketing systems, as sketched below.
|
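These toggles are described only at a high level above; the minimal sketch below shows what such a configuration object might look like. The class, field names, and default values are illustrative assumptions, not Yottasecure's actual SDK or API.

```python
# Minimal configuration sketch -- class, field names, and values are
# illustrative assumptions, not the actual Yottasecure SDK or API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ScanConfig:
    copilot_enabled: bool = True                   # toggle the LLM co-pilot
    scan_depth: str = "standard"                   # e.g. "quick", "standard", "deep"
    compliance_frameworks: list = field(default_factory=lambda: ["SOC 2", "HIPAA"])
    cicd_integration: Optional[str] = "github"     # CI/CD pipeline to hook into, if any
    ticketing_integration: Optional[str] = "jira"  # ticketing system for remediation tasks

# Example: a deep scan with the co-pilot disabled
config = ScanConfig(scan_depth="deep", copilot_enabled=False)
print(config)
```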
| Intended Domain |
Cybersecurity for legal, healthcare, infrastructure, and public sector organizations. |
| Training Data |
ML models are trained on public CVE/KEV datasets, exploit databases, vulnerability reports, and real-world incident data. All data sources are publicly available or licensed. LLM components utilize OpenAI APIs and are fine-tuned via supervised learning. Data updates occur monthly. |
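As a rough illustration of the supervised fine-tuning step, the sketch below shows one possible shape of a training record for the co-pilot. The record format, file name, and content are assumptions; the actual format depends on the LLM provider's fine-tuning interface.

```python
# Illustrative shape of one supervised fine-tuning record for the co-pilot.
# The fields, file name, and content are assumptions, not the actual pipeline.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a vulnerability remediation assistant."},
        {"role": "user", "content": "How do I remediate CVE-XXXX-YYYY in package libfoo?"},
        {"role": "assistant", "content": "Upgrade libfoo to the patched release and rebuild the affected image."},
    ]
}

# Fine-tuning datasets are commonly stored as one JSON record per line (JSONL).
with open("finetune_examples.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```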
| Test Data |
System performance is tested on historical vulnerability and exploit datasets and on anonymized customer scans under varying conditions. Benchmarking covers false positive rates, prioritization accuracy, and remediation times. |
| Model Information |
The platform uses:
- Transformer-based LLMs (via OpenAI API)
- Supervised ML for exploit prediction (see the sketch after this list)
- Rule-based logic for compliance modules
- Rasa-based NLP pipeline for chatbot interface
|
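The supervised exploit-prediction component is only named above; a minimal sketch of what such a model could look like follows. The features, labels, and classifier choice are assumptions for illustration, not Yottasecure's production pipeline.

```python
# Illustrative exploit-likelihood scoring sketch. Features, labels, and the
# classifier are assumptions for illustration, not the production model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-finding features:
# [CVSS base score, days since disclosure, public exploit available, listed in CISA KEV]
X = np.array([
    [9.8,  12, 1, 1],
    [5.3, 400, 0, 0],
    [7.5,  30, 1, 0],
    [4.0, 900, 0, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = exploitation observed, 0 = not observed

model = GradientBoostingClassifier().fit(X, y)

# Predicted probability of exploitation for a new finding
new_finding = np.array([[8.1, 5, 1, 0]])
print(model.predict_proba(new_finding)[0, 1])
```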
| Update Procedure |
Models are updated quarterly or upon significant threat intelligence developments. Users are notified and have access to changelogs. Critical updates are automatically applied; others can be toggled in admin settings. |
| Inputs and Outputs |
Inputs: Docker images, SBOMs, scan results, package metadata, user queries
Outputs: Prioritized vulnerability list, natural language remediation guidance, compliance dashboards, AI chat responses
Integrations: Trivy, Qualys, Nessus, Jira, GitHub, Slack, ServiceNow |
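A hypothetical shape for one entry of the prioritized vulnerability list is sketched below; the field names and values are placeholders, not the platform's actual output schema.

```python
# Hypothetical shape of one prioritized finding. Field names and values are
# placeholders, not the platform's actual output schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    cve_id: str                 # placeholder identifier
    package: str                # affected package from the SBOM or image
    cvss_score: float           # severity reported by the upstream scanner
    exploit_likelihood: float   # ML-predicted probability of exploitation
    remediation: str            # natural language guidance from the co-pilot
    ticket_url: Optional[str] = None  # Jira/ServiceNow ticket, if one was created

finding = Finding("CVE-XXXX-YYYY", "openssl", 9.8, 0.91,
                  "Upgrade openssl to the patched release in the base image.")
print(finding)
```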
| Performance Metrics |
Metrics include precision/recall of exploitability scoring, mean time to patch (MTTP), co-pilot usage rate, and customer-reported resolution confidence. Dashboards monitor these in real time. |
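As a worked illustration of two of these metrics, the sketch below computes precision/recall of exploitability scoring from sets of flagged versus actually exploited findings, and MTTP from per-finding patch delays; the sample values are made up.

```python
# Sketch of two of the listed metrics; the sample values are made up.
def precision_recall(flagged: set, exploited: set) -> tuple:
    """flagged: findings scored as likely exploitable; exploited: findings
    later observed to be exploited."""
    true_pos = len(flagged & exploited)
    precision = true_pos / len(flagged) if flagged else 0.0
    recall = true_pos / len(exploited) if exploited else 0.0
    return precision, recall

def mean_time_to_patch(delays_days: list) -> float:
    """delays_days: days from detection to remediation, one entry per finding."""
    return sum(delays_days) / len(delays_days)

print(precision_recall({"F-1", "F-2", "F-3"}, {"F-2", "F-3", "F-4"}))  # approx. (0.67, 0.67)
print(mean_time_to_patch([3, 7, 14, 2]))                               # 6.5
```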
| Bias |
Bias is managed through continuous audit of training data sources, model fairness tests, and user feedback loops. The co-pilot avoids automated enforcement and keeps a human in the loop for decisions. |
| Robustness |
The system flags low-confidence outputs and handles outliers by deferring to human review. Overridden or corrected decisions are used to fine-tune future model updates. |
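A minimal sketch of the confidence-based deferral described above follows; the threshold and triage rules are assumptions for illustration only.

```python
# Confidence-based deferral sketch. The threshold and triage rules are
# assumptions for illustration, not the platform's actual logic.
CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off below which a finding goes to a human

def triage(finding_id: str, exploit_likelihood: float, confidence: float) -> dict:
    if confidence < CONFIDENCE_THRESHOLD:
        # Low-confidence output: defer to human review rather than auto-prioritize.
        return {"finding": finding_id, "action": "defer_to_human_review"}
    action = "prioritize" if exploit_likelihood >= 0.5 else "backlog"
    return {"finding": finding_id, "action": action}

print(triage("F-101", exploit_likelihood=0.91, confidence=0.55))  # deferred
print(triage("F-102", exploit_likelihood=0.91, confidence=0.88))  # prioritized
```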
| Optimal Conditions |
Performs best with well-structured scan data, clear SBOMs, and known frameworks (e.g., SOC 2, HIPAA). Works well in environments of 100+ assets. |
| Poor Conditions |
Less effective with noisy or sparse data or missing metadata. The co-pilot may hallucinate when given ambiguous input. |
| Explanation |
The system provides natural language explanations for prioritization, decision reasoning in dashboards, and remediation logic in the co-pilot. Outputs are reviewable and auditable. |
| Jurisdiction Considerations |
The system aligns with U.S. cybersecurity frameworks (NIST, CISA KEV) and supports compliance for SOC 2, HIPAA, and state-level privacy laws. |
| Data Protection |
Aligned with NIST 800-53 and the NIST AI RMF; SOC 2 Type I certification is in progress. Supports HIPAA-compliant handling of PHI and anonymized processing of scan data. |