We are seeking a highly skilled and experienced Product Security Architect (Engineer 4) to design, develop, and scale an AI-driven security automation engine that will integrate directly into Keysight’s Software Development Life Cycle (SDLC). This role will lead the development of an intelligent system capable of:
- Understanding the full Software Development Life Cycle (SDLC) and shifting automation of remediation tasks left, earlier in the SDLC
- Consuming and interpreting scan data from our vulnerability assessment platform
- Automatically generating exploit and testing scripts using AI
- Validating vulnerability existence with high confidence and reproducibility earlier in the SDLC
- Feeding verified results back into the UI with reliability, auditability, and traceability
- Providing AI-assisted code patching, secure coding recommendations, and CI/CD automation to accelerate mitigation workflows earlier in the SDLC
You will work closely with product owners, security researchers, and full stack engineers to shape Keysight’s next-generation SDLC as we shift left and integrate autonomous security tooling. You will also partner with the Product Security and Compliance teams to ensure that the new AI automation capabilities are seamlessly incorporated into the SDLC, strengthening validation accuracy, reducing friction, and improving release cadence across Keysight products.
About the Team:
You will collaborate with product owners, security researchers, and full stack engineers to build advanced autonomous security capabilities that enhance Keysight’s SDLC.

Key Responsibilities:

Software Development Life Cycle (SDLC) Integration
- Seamlessly integrate security automation into Keysight’s SDLC
- Enable ingestion, normalization, and correlation across Black Duck SCA, Nessus Vulnerability Management, Burp Suite DAST, SAST tools, and product telemetry

AI-Driven Vulnerability Verification & Automation
- Design and implement AI models that interpret vulnerability scan data and autonomously generate exploit scripts
- Build automated validation pipelines using sandboxed and orchestrated environments
- Develop classification and scoring logic with strong confidence metrics
- Implement guardrails, safety classifiers, and auditing for safe LLM operations

False Positive Reduction & Evidence Quality
- Build multi-agent reasoning workflows to reduce false positives
- Produce high-fidelity Vulnerability Verification Evidence Packages (VVEP) with logs, traces, and reproducible exploit outcomes

AI-Assisted Code Patching & Remediation
- Deliver AI-generated secure code recommendations and automated patching workflows
- Integrate remediation automation into CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)

Penetration Test Augmentation
- Support AI-enhanced penetration testing, API fuzzing, and dynamic security validation
- Work with security researchers to improve exploit reliability and detection precision

Leadership & Cross-Functional Collaboration
- Partner with product teams and engineering leadership to ensure validated results drive actionable remediation
- Collaborate with two L3 engineers (pipeline automation and vulnerability analysis) and guide technical direction
- Influence secure development practices and drive adoption of security automation across product teams
Must Haves:
- Bachelor’s degree in Computer Science, Computer Engineering, Cybersecurity, AI/ML, or a related field
- Over 8 years of R&D experience, with a strong preference for AI- and security-focused work
- Experience with secure coding, code reviews, and software architecture for security across multiple projects or product lines
- Strong understanding of secure software development practices, threat modeling, and vulnerability management
- Proficiency in Python, security automation frameworks, and CI/CD integration
- Hands-on experience with SAST, DAST, SCA, container scanning, and other security tools
- Familiarity with Git, CI/CD pipelines, build automation, and DevSecOps principles
- Excellent leadership, communication, and cross-team collaboration skills
- Ability to operate independently, drive architectural decisions, and lead technical initiatives
We Value:
- Master’s degree in Cybersecurity, Computer Science, or a related technical field
- Hands-on AI application development experience
- Practical experience with AI/ML-based security automation or agentic AI/LLM integration (OpenAI, Azure OpenAI, HuggingFace)
- Exposure to cloud security or container/Kubernetes security in the context of software development pipelines
- Experience with penetration-testing methodologies (OWASP, PTES, MITRE ATT&CK)
- Working knowledge of Docker, Kubernetes, and modern CI/CD platforms
- A passion for continuous improvement and learning
- Excellent problem-solving skills and attention to detail
- Proven ability to work cross-functionally and influence without authority