InfoBlox / UltraDNS Integration
It would be great to be able to query InfoBlox / UltraDNS data through an API integration. InfoBlox would require an internal endpoint that can obtain the data without exposing it externally.
Feature
16 days ago
Internal API Integration
It would be fantastic if we were able to deploy a network appliance inside our secure boundary and then select it for performing connection requests to internal-only tools via API, instead of having to expose them to the internet.
Feature
16 days ago
Providing auth tokens, refresh tokens, and refresh endpoints to Guard
Description: I’m on an engagement where you have to sign in with Okta OAuth to access the application, in addition to having to use some form of MFA (in this case my physical YubiKey, making it essentially impossible for Guard to replicate). Expected/Desired Behavior: For web application assets, there could be a way to provide auth and refresh tokens, along with the refresh endpoint (or perhaps a request to the refresh endpoint that Guard can parse).
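One possible shape for this, sketched under the assumption of a standard OAuth 2.0 refresh_token grant. `WebAppCredential` and the function names are illustrative, not existing Guard types:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class WebAppCredential:
    """Illustrative bundle a user could supply for a web application asset."""
    access_token: str
    refresh_token: str
    refresh_endpoint: str   # e.g. the Okta /token URL for this app
    expires_at: float       # unix timestamp when access_token lapses

def needs_refresh(cred: WebAppCredential, now: Optional[float] = None, skew: float = 60.0) -> bool:
    """Refresh slightly early (skew seconds) so in-flight requests never hit a 401."""
    now = time.time() if now is None else now
    return now >= cred.expires_at - skew

def build_refresh_request(cred: WebAppCredential) -> dict:
    """Standard OAuth 2.0 refresh_token grant body (RFC 6749, section 6)."""
    return {
        "url": cred.refresh_endpoint,
        "data": {"grant_type": "refresh_token", "refresh_token": cred.refresh_token},
    }
```

With this shape, Guard could keep scanning an Okta-protected app without ever needing to replicate the hardware-key MFA step itself.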
Feature
28 days ago
Low Priority
In Progress
Security Scorecard Grade
Refined Risk Score Algorithm (Research-Backed)

Core principle: Start at 100, deduct for demonstrated risk, with logarithmic normalization and anti-gaming protections.

Step 1: Per-Risk Deduction
Each demonstrated risk contributes:
  deduction(risk) = base_weight(severity) × age_multiplier(days_open)
Base weights (exponential spacing, ~2x per level):
Age multiplier (sigmoid curve, capped at 3x, steepest around the SLA boundary):
  sla_days = {critical: 7, high: 30, medium: 60, low: 90}
  age_multiplier = 1.0 + 2.0 × sigmoid((days_open - sla_days) / sla_days)
  where sigmoid(x) = 1 / (1 + exp(-4x))
This means:
- Within SLA → multiplier ~1.0-1.5x (minor urgency signal)
- At SLA boundary → multiplier ~2.0x (clear penalty ramp)
- 2x past SLA → multiplier ~2.8x (approaching cap)
- Cap at 3.0x → never exceeds 3x regardless of age

Logarithmic Aggregation
Sum deductions but compress via log to prevent runaway scores:
  raw_deductions = sum(deduction(risk) for each demonstrated risk)
  compressed_deductions = 100 × log(1 + raw_deductions) / log(1 + max(raw_deductions, asset_scale))
  asset_scale = max(total_assets × 0.5, 10)
This means:
- A customer with 1,000 assets can tolerate more total raw deduction before their grade drops, but logarithmically — not linearly
- The max(…, 10) floor prevents tiny asset counts from wildly inflating scores
- The 0.5 factor means roughly 1 medium risk per 2 assets = score of ~50 (grade D)

Confidence Adjustment (Wilson Score)
For customers with very few assets, apply a confidence penalty:
  confidence = (asset_count + 1) / (asset_count + 10)
- 3 assets → confidence = 0.31 → score pulled toward 70 (grade C)
- 10 assets → confidence = 0.55 → moderate pull
- 50 assets → confidence = 0.85 → minimal adjustment
- 100+ assets → confidence ≈ 0.92+ → negligible
  adjusted_score = confidence × computed_score + (1 - confidence) × 70
This ensures customers with tiny asset counts don't get misleading A grades simply because we haven't found anything yet.

Final Grade
  risk_score = max(0, 100 - compressed_deductions)
  final_score = confidence × risk_score + (1 - confidence) × 70
Grade mapping:
- A: 90-100 (Excellent — minimal demonstrated risk)
- B: 80-89 (Good — some risk, within SLAs)
- C: 70-79 (Fair — moderate risk or aging findings)
- D: 60-69 (Poor — significant demonstrated risk)
- F: <60 (Critical — immediate attention required)
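As a sketch, the formulas above compose into one scoring routine. The base-weight list is not reproduced in this writeup, so the `BASE_WEIGHTS` values below are assumptions chosen only to illustrate the stated ~2x exponential spacing, not the actual production weights:

```python
import math

# Assumed base weights (~2x spacing per level); the real values are not in this doc.
BASE_WEIGHTS = {"critical": 8.0, "high": 4.0, "medium": 2.0, "low": 1.0}
SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def age_multiplier(severity: str, days_open: float) -> float:
    # ~1.0 within SLA, exactly 2.0 at the SLA boundary, capped at 3.0
    sla = SLA_DAYS[severity]
    return min(3.0, 1.0 + 2.0 * sigmoid((days_open - sla) / sla))

def compressed_deductions(risks, total_assets: int) -> float:
    # risks: iterable of (severity, days_open) pairs
    raw = sum(BASE_WEIGHTS[sev] * age_multiplier(sev, days) for sev, days in risks)
    if raw == 0:
        return 0.0
    asset_scale = max(total_assets * 0.5, 10)
    return 100.0 * math.log(1 + raw) / math.log(1 + max(raw, asset_scale))

def confidence(asset_count: int) -> float:
    return (asset_count + 1) / (asset_count + 10)

def final_score(risks, total_assets: int) -> float:
    risk_score = max(0.0, 100.0 - compressed_deductions(risks, total_assets))
    c = confidence(total_assets)
    return c * risk_score + (1 - c) * 70.0

def grade(score: float) -> str:
    return "A" if score >= 90 else "B" if score >= 80 else \
           "C" if score >= 70 else "D" if score >= 60 else "F"
```

Note that a clean customer with few assets still lands near 70 (grade C) because the confidence term pulls the score toward the neutral midpoint, which is the intended anti-gaming behavior.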
Feature
28 days ago
Completed
Snyk Code
Summary
Integrate Snyk Code (SAST) findings into Chariot via the Snyk REST API. Snyk Code uses the DeepCode AI engine for semantic code analysis (data flow, taint tracking, control flow, type inference) across 15+ languages.

Background
Snyk Code is an AI-powered SAST tool. Integration is polling-based (webhooks do NOT support Snyk Code events). The Chariot backend already has 20+ integration patterns (Qualys, Wiz, GitHub, BurpSuite Enterprise) via BaseCapability + init() + Send() that map cleanly to Snyk's data model.

Technical Details
- API Base URL: https://api.snyk.io/rest (4 regional variants)
- Versioning: Date-based (recommended: 2024-10-15)
- Format: JSON:API compliant (Content-Type: application/vnd.api+json)
- Rate Limit: 1,620 req/min per API key (HTTP 429 on exceed)
- Auth: Service Account token recommended (Authorization: token {TOKEN}). Enterprise plan required.

Key Endpoints
| Method | Path | Purpose |
| -- | -- | -- |
| GET | /orgs/{org_id}/issues?type=code | List Snyk Code findings with source locations |
| GET | /orgs/{org_id}/issues/{issue_id} | Get specific issue detail |
| GET | /orgs/{org_id}/projects | List projects (find Code Analysis projects) |
| GET | /orgs/{org_id}/settings/sast | Check Snyk Code enablement |
| GET | /orgs | List accessible organizations |

Issue Response Fields
- effective_severity_level: "high", "medium", "low" (no Critical)
- type: "code" for Snyk Code findings
- severities: Array with type, source, level, score, vector
- scan_item: Relationship to project/target
- ignored: Boolean

Severity Mapping
| Snyk Code | Chariot Risk Status |
| -- | -- |
| High | H |
| Medium | M |
| Low | L |
Snyk Code uses only 3 severity levels (no Critical). Priority Score (0-1000) is stored as an attribute for supplemental ranking.

Data Mapping
| Snyk Code Field | Chariot Entity | Chariot Field/Method |
| -- | -- | -- |
| Issue ID | Risk | Key (dedup identifier) |
| Severity (H/M/L) | Risk | Status severity character |
| CWE ID | Risk Attribute | risk.Attribute("cwe", cweId) |
| Priority Score | Risk Attribute | risk.Attribute("priority_score", score) |
| File Path | Risk Attribute | risk.Attribute("file_path", path) |
| Line Number | Risk Attribute | risk.Attribute("line_number", line) |
| Description | Risk Definition | Definition file |
| Data Flow | Risk Proof | Proof file with source-to-sink flow |
| Fix Analysis | Risk Definition | Recommendation section |
| Repository | Asset | Code repository as Asset |

Implementation Plan
1. Add SnykCredential to tabularium - new credential type in /modules/tabularium/pkg/model/model/credential.go storing the API token, org ID, and regional base URL
2. Create integration handler at /modules/chariot/backend/pkg/tasks/integrations/snyk/ using the BaseCapability pattern
3. Implement polling via GET /orgs/{org_id}/issues?type=code with cursor-based pagination
4. Map findings to Chariot Risks with severity, CWE, file path, and line number attributes
5. Store data flow as Proof files - Snyk's source-to-sink taint flow attached to Risk entities
6. Implement exponential backoff for rate limits (1,620 req/min) with errgroup Medium concurrency (30)

Reference Integrations
- Qualys (qualys.go): XML API, pagination, full vulnerability mapping with CVSS/CVE attributes
- Wiz (wiz.go): Cloud security with CheckAffiliation
- BurpSuite Enterprise (burp-enterprise.go): Web app scanning pattern

Permissions Required
| Permission | Purpose |
| -- | -- |
| Org Collaborator (minimum) | Read issues, projects, settings |
| Org Admin | Full CRUD, manage integrations |

Key Considerations
- Snyk Code must be enabled at the organization level before API access works
- Webhooks do NOT support Snyk Code events; polling is required
- SARIF format is available, but REST API JSON is more direct for Chariot mapping
- Enterprise plan required for full API/Service Account access
- Validate with a live Snyk account before implementation to confirm the response schema

Research
Full research available at .claude/.output/research/2026-03-06-111722-snyk-code-integration/

References
- Snyk REST API Docs
- Snyk Issues API
- Snyk Code Docs
- Snyk Auth Docs
- snyk/code-client-go - Official Go SAST client
- cloudquery/snyk-client-go - Auto-generated REST client
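The polling step of the plan above can be sketched as follows. This is illustrative only: the production handler would be Go in the Chariot backend, the `fetch` callable stands in for an authenticated HTTP client, and the `links.next` cursor shape is an assumption to confirm against a live Snyk account, as the ticket itself recommends:

```python
from typing import Callable, Dict, Iterator

# Snyk Code severities map to Chariot risk status characters (no Critical level).
SEVERITY_MAP = {"high": "H", "medium": "M", "low": "L"}

def map_severity(effective_severity_level: str) -> str:
    return SEVERITY_MAP[effective_severity_level.lower()]

def poll_code_issues(fetch: Callable[[str], Dict], org_id: str) -> Iterator[dict]:
    """Walk cursor-based pagination over /orgs/{org_id}/issues?type=code.

    `fetch(path)` stands in for an authenticated GET against
    https://api.snyk.io/rest with 'Authorization: token {TOKEN}'; a real
    client would also retry HTTP 429 with exponential backoff.
    """
    path = f"/orgs/{org_id}/issues?type=code"
    while path:
        body = fetch(path)
        yield from body.get("data", [])
        path = body.get("links", {}).get("next")
```

Injecting `fetch` keeps the pagination and severity-mapping logic unit-testable without network access, matching how the existing integrations are structured around a shared capability base.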
Feature
about 1 month ago
Completed
Run cross-cloud dangling DNS check at scale
Split out from ENG-400. We should expand the affiliation agent to proactively scan a customer's environment for dangling DNS records, rather than running on demand, using all of the heuristics of the affiliation agent (e.g., triggering runs against cross-cloud IPs, certificate mismatches, abnormal website content, etc.).
Feature
about 1 month ago
Completed
Update affiliation agent for cross-cloud dangling DNS
Split out from ENG-400. Track updates to the asset affiliation agent to support the cross-cloud dangling DNS record check (DNS resolves to a cloud IP that is not present in the customer’s assets for that provider). Includes deciding behavior when the resolved IP belongs to a cloud provider with no customer integration (Unverified / Skip / Dangling regardless).
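The core check described above could be sketched like this. The provider CIDR ranges below are made-up placeholders from the IETF documentation blocks (real ones would be loaded from published cloud IP feeds such as AWS's ip-ranges.json), and the function names are illustrative:

```python
import ipaddress

# Placeholder ranges for illustration only; production would load published
# cloud provider IP feeds (e.g. AWS ip-ranges.json, Azure Service Tags).
PROVIDER_RANGES = {
    "aws": [ipaddress.ip_network("198.51.100.0/24")],
    "azure": [ipaddress.ip_network("203.0.113.0/24")],
}

def provider_of(ip: str):
    """Return the cloud provider whose ranges contain this IP, if any."""
    addr = ipaddress.ip_address(ip)
    for provider, nets in PROVIDER_RANGES.items():
        if any(addr in net for net in nets):
            return provider
    return None

def classify_record(resolved_ip: str, assets_by_provider: dict, integrated_providers: set) -> str:
    """assets_by_provider maps provider -> set of IPs the customer owns there."""
    provider = provider_of(resolved_ip)
    if provider is None:
        return "not-cloud"
    if provider not in integrated_providers:
        # The open question from this ticket: Unverified / Skip / Dangling regardless
        return "unverified"
    if resolved_ip in assets_by_provider.get(provider, set()):
        return "owned"
    return "dangling"
```

The "unverified" branch is exactly the decision point the ticket leaves open: without a customer integration for that provider, we cannot distinguish an unclaimed IP from one owned outside our visibility.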
Feature
about 1 month ago
Completed
CrowdStrike integration Master SID support
Summary Add Master SID-level integration support for CrowdStrike Falcon, allowing customers to integrate at a higher organizational level so that fewer individual integrations are needed, reducing customer friction. Background We currently have a CrowdStrike Falcon integration, but it operates at the individual SID level. Customers with multiple SIDs must configure each one separately, which creates unnecessary friction. Integrating at the Master SID level will allow a single integration to cover all child SIDs, significantly simplifying the setup process for customers managing multiple environments. Scope Add Master SID authentication support to the CrowdStrike Falcon integration Enable automatic discovery/coverage of child SIDs under a Master SID Maintain backward compatibility with existing individual SID integrations Acceptance Criteria Customers can authenticate using a Master SID API key All child SIDs under the Master SID are accessible through the single integration Existing individual SID integrations continue to work without changes Tests passing / Documentation updated References PR: (If applicable) Related: Existing CrowdStrike Falcon integration
Feature
about 2 months ago