This research proposal investigates how quantization techniques affect security vulnerability detection in Software Composition Analysis (SCA) across multiple programming languages. It introduces QuantSec, a systematic evaluation framework for assessing post-training quantization, quantization-aware training, and mixed-precision quantization on Large Language Models (LLMs) used for dependency security analysis. The study spans six languages with bit-widths from 2-bit to 16-bit precision, addressing cross-language detection accuracy and false negative rates for critical vulnerabilities. The work aims to provide practical guidelines for deploying secure, efficient quantized models in production DevSecOps pipelines.
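To make the post-training quantization setting concrete, the following is a minimal sketch of per-tensor symmetric quantization at several bit-widths; it is an illustrative assumption on our part, not QuantSec's actual procedure, and the function name `quantize_symmetric` is hypothetical.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int) -> np.ndarray:
    """Map float weights onto a signed integer grid of the given
    bit-width, then dequantize back to floats (round-trip error
    shows the information lost at that precision)."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit, 1 for 2-bit
    scale = np.abs(weights).max() / qmax  # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                      # dequantized approximation

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
for bits in (2, 4, 8, 16):
    err = np.abs(w - quantize_symmetric(w, bits)).mean()
    print(f"{bits:2d}-bit mean abs round-trip error: {err:.5f}")
```

Lower bit-widths leave coarser grids and larger round-trip error, which is the mechanism behind the detection-accuracy concerns studied here.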
Key findings
Quantization reduces model size by up to 8× but may itself introduce weaknesses into vulnerability detection systems.
Cross-language vulnerability detection in polyglot codebases remains understudied in current quantization research.
The QuantSec framework characterizes trade-offs between computational efficiency and security detection performance across 2-bit to 16-bit precision configurations.
Proposed quantization-aware defense mechanisms are designed to maintain detection capabilities under low-bitwidth compression.
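The 8× figure above follows from simple bit-width arithmetic; the sketch below shows the idealized calculation, assuming a 16-bit baseline and ignoring quantization metadata such as per-tensor scales and zero-points (the helper name is ours, not part of QuantSec).

```python
def compression_ratio(baseline_bits: int, quantized_bits: int) -> float:
    """Idealized weight-storage ratio between two precisions,
    ignoring overhead from scales, zero-points, and activations."""
    return baseline_bits / quantized_bits

# A 16-bit baseline compressed to 2-bit weights yields the 8x figure
print(compression_ratio(16, 2))   # 8.0
print(compression_ratio(16, 4))   # 4.0
```

Real deployments see somewhat smaller ratios because quantization parameters and any unquantized layers add back storage overhead.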
Limitations & open questions
Proposed research has not yet been empirically validated; findings depend on future experimental execution.
Focuses specifically on dependency security risks rather than broader code generation or general software engineering tasks.
Limited to static analysis approaches using LLMs, excluding dynamic or runtime vulnerability detection methods.