Linux Kernel Maintainers Adopt New AI‑Assisted Code Policy, Mandating Human Liability and “Assisted‑by” Attribution
What Happened — Linus Torvalds and the Linux kernel maintainers have published the project’s first formal policy governing AI‑generated contributions. The rules prohibit AI agents from adding Signed‑off‑by tags, require an “Assisted‑by” attribution line that names the model and tools used, and place full legal and security responsibility on the human submitter.
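In practice, the disclosure would appear as a commit trailer alongside the human sign-off. The sketch below is illustrative only: the trailer names ("Assisted-by", "Signed-off-by") come from the policy as reported, but the exact value format for naming the model and tool is an assumption, not quoted policy text:

```text
ring_buffer: fix off-by-one in wraparound check

The wraparound comparison used the full buffer size rather than
size - 1, silently dropping the final slot.

Assisted-by: ExampleLLM v2 (code-completion plugin)
Signed-off-by: Jane Developer <jane@example.com>
```

Note that only the human submitter appears in Signed-off-by; under the policy, the AI tool may be credited in Assisted-by but may never certify the DCO.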
Why It Matters for TPRM —
- Third‑party developers and vendors that supply kernel‑level code must adjust their contribution workflows to remain compliant.
- Failure to disclose AI assistance could expose organizations to licensing violations, warranty disputes, or undiscovered security flaws.
- The policy creates a new audit‑ready metadata field (“Assisted‑by”) that can be leveraged in supply‑chain risk assessments.
Who Is Affected — Open‑source contributors, hardware vendors, cloud‑infrastructure providers, and any third party that integrates or ships Linux kernel patches.

Recommended Actions —
- Update internal contribution guidelines to include the “Assisted‑by” tag and prohibit AI‑generated Signed‑off‑by entries.
- Conduct a compliance review of recent kernel patches for undisclosed AI assistance.
- Add the new policy to vendor risk questionnaires and contract clauses for any partner delivering kernel‑level code.
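The compliance review above can be partly automated by parsing commit-message trailers. The sketch below is a minimal example, not an official tool: the `AI_AGENT_HINTS` denylist is a hypothetical placeholder that a real review program would maintain itself, and commit messages are assumed to follow the standard git convention of trailers in the final paragraph.

```python
import re

# Git trailers are "Key: value" lines in the last paragraph of a
# commit message (the same block that holds Signed-off-by).
TRAILER_RE = re.compile(r"^([A-Za-z-]+):\s*(.+)$")

# Hypothetical hints for spotting an AI agent named in a sign-off;
# a real review would keep its own list of known tools.
AI_AGENT_HINTS = ("gpt", "copilot", "claude", "llm")

def parse_trailers(message: str) -> list[tuple[str, str]]:
    """Return (key, value) pairs from the final paragraph of a commit message."""
    last_block = message.strip().split("\n\n")[-1]
    trailers = []
    for line in last_block.splitlines():
        m = TRAILER_RE.match(line.strip())
        if m:
            trailers.append((m.group(1), m.group(2)))
    return trailers

def review(message: str) -> dict[str, list[str]]:
    """Collect the two facts the policy cares about:
    any Assisted-by disclosures, and any Signed-off-by entries
    that appear to name an AI agent (policy violation)."""
    trailers = parse_trailers(message)
    return {
        "assisted_by": [v for k, v in trailers if k.lower() == "assisted-by"],
        "ai_signoffs": [
            v for k, v in trailers
            if k.lower() == "signed-off-by"
            and any(hint in v.lower() for hint in AI_AGENT_HINTS)
        ],
    }
```

For bulk scans, git's built-in trailer parsing (e.g. `git log --format='%(trailers:key=Assisted-by)'`) can feed the same checks without hand-rolled parsing.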
Technical Notes — The policy does not reference a specific CVE; it addresses process risk rather than a technical vulnerability. It targets AI‑assisted code generation tools (e.g., large language models, code‑completion assistants) and enforces human‑only certification via the Developer Certificate of Origin (DCO). Source: ZDNet Security