Code Repository to Threat Model: A Quick Guide

This guide shows you how to automatically analyze your codebase and send the results to SecureFlag ThreatCanvas for threat modeling.

Code Repository to Threat Model is a SecureFlag feature integrated into your CI/CD pipeline that automatically examines your repository structure and generates a ThreatCanvas threat model. It identifies security boundaries, components, and potential attack surfaces to help you better understand and secure your application.

How it Works

  1. Code Extraction: The runner scans your repository and extracts information about the architecture.

  2. AI Analysis: AI analyzes the structure and generates a summary that identifies components, boundaries, and integration points.

  3. Send to ThreatCanvas: The summary of the code repository is sent to ThreatCanvas, which builds a threat model. 

Data Protection & Privacy

Your code stays secure throughout the analysis process:
  1. Code extraction happens internally: All code scanning occurs within your CI environment. Your source code never leaves your infrastructure during the extraction phase.

  2. AI analysis uses your own LLM account: The AI analysis is performed using your organization-owned LLM account: you control the API key and data processing.

  3. No code is sent to SecureFlag: SecureFlag systems receive only the AI-generated architectural description of your repository, not your actual source code. This includes information such as component and function names, identified boundaries, and integration points, but not code implementation details.

This architecture ensures that sensitive source code remains within your control while still enabling powerful threat modeling capabilities in ThreatCanvas.

Supported AI Providers

Currently, Anthropic, Gemini, OpenAI, and Azure OpenAI providers are supported.

Quick Start

Prerequisites

Before you begin, make sure you have the following:
  1. Your SECUREFLAG_API_KEY
  2. Your SECUREFLAG_MODEL_UUID (the ThreatCanvas model you want to update)

Depending on the AI provider you choose, you’ll need to set the AI Provider variables (choose one):

Option 1 - Anthropic

  1. ANTHROPIC_API_KEY: Anthropic API key

Option 2 - OpenAI

  1. OPENAI_API_KEY: OpenAI API key

Option 3 - Azure OpenAI

  1. AZURE_OPENAI_KEY: Azure OpenAI API key
  2. AZURE_OPENAI_ENDPOINT: Azure endpoint URL (e.g., https://your-resource.openai.azure.com/)
  3. AZURE_OPENAI_DEPLOYMENT: Azure deployment name

Option 4 - Google Gemini

  1. GEMINI_API_KEY: Gemini API key
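Taken together, a minimal environment for the Anthropic option might look like the sketch below; all values shown are placeholders, and exactly one provider's key should be set:

```shell
# All values are placeholders - substitute your real credentials.
export SECUREFLAG_API_KEY="..."     # SecureFlag API key (Write threat models scope)
export SECUREFLAG_MODEL_UUID="3f2a9c7e-8b41-4d2a-9f6e-1c7b5a92e4d1"
export ANTHROPIC_API_KEY="..."      # Option 1 - Anthropic; set only one provider's key
```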

Get the SecureFlag API Key

As Organization Admin
Step 1: Log in to SecureFlag as an Organization Admin, and in the Management Portal, click the Settings icon in the top-right corner of the navigation bar.



Step 2: Scroll to the API Access Tokens section, and select Write threat models as the scope. Enter a name for the API access token, then click Generate. Be sure to save the token displayed in the modal window, as it will not be shown again.



As User
Step 1: Log in to the SecureFlag platform, then click the Settings button in the bottom-left corner, or click the icon with your initial in the top-right corner of the navigation bar.



Step 2: Click the Security tab and find the Generate API Access Token section at the bottom. Select Write threat models as the scope, enter a name for the API access token, then click Generate. Be sure to save the token displayed in the modal window, as it will not be shown again.




Get the Model UUID

Step 1: Sign in to SecureFlag and navigate to the ThreatCanvas dashboard.

Step 2: Open ThreatCanvas and either create a new blank model or select an existing model from your list.

Step 3: Configure the model settings as needed. To focus on secure coding concerns, select the “Secure Coding Implementation” risk framework.

Step 4: Click Share Model, choose Organization, and copy the UUID from the end of the URL. It will look something like this: 3f2a9c7e-8b41-4d2a-9f6e-1c7b5a92e4d1.
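If you script this step, the UUID can be pulled from the end of the URL with plain shell parameter expansion; the URL below is a hypothetical example:

```shell
# Hypothetical share URL - the model UUID is the final path segment.
share_url="https://example.secureflag.com/threatcanvas/3f2a9c7e-8b41-4d2a-9f6e-1c7b5a92e4d1"

# Strip everything up to and including the last '/'.
model_uuid="${share_url##*/}"
echo "$model_uuid"   # 3f2a9c7e-8b41-4d2a-9f6e-1c7b5a92e4d1
```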



CI/CD Integration

The SecureFlag CI Runner is platform-agnostic and works with any CI/CD system that supports Docker. The examples below show GitHub Actions and GitLab CI configurations, but you can easily adapt these patterns to other platforms like Jenkins, CircleCI, Azure DevOps, Bitbucket Pipelines, or any custom CI/CD setup. 
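Because the runner is a standard Docker image, the same job can be sketched as a direct invocation. The image name and entrypoint come from the GitLab example in this guide; the /workdir mount point and the credential values are assumptions:

```shell
# Sketch of running the SecureFlag CI Runner outside a hosted CI system.
# The /workdir mount point and credential values are assumptions.
docker run --rm \
  -v "$(pwd):/workdir" \
  -w /workdir \
  -e SECUREFLAG_COMMANDS=model-repo \
  -e SECUREFLAG_API_KEY="..." \
  -e SECUREFLAG_MODEL_UUID="..." \
  -e ANTHROPIC_API_KEY="..." \
  registry.gitlab.com/secureflag-community/sf-runner:latest \
  /app/entrypoint.sh
```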

Customize the Model

You can further customize the threat model directly in the SecureFlag platform by adjusting settings such as the selected risk frameworks. 

Certain configuration parameters can also be passed from your CI/CD pipeline. For example, you can control the level of detail in the generated diagram using environment variables like:
  1. SECUREFLAG_COMPONENT_LIMIT: Provides a hint on the expected number of nodes in the diagram, helping the AI determine how detailed the threat model should be.

  2. SECUREFLAG_REPO_PATH: Specifies an absolute path to restrict the analysis to a specific directory within the repository.
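As a sketch, these could be exported in the job environment; the limit value and the directory path below are hypothetical:

```shell
# Optional tuning - the limit value and the path are hypothetical examples.
export SECUREFLAG_COMPONENT_LIMIT="15"                      # aim for roughly 15 diagram nodes
export SECUREFLAG_REPO_PATH="$CI_PROJECT_DIR/services/api"  # analyze only this directory
```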

GitHub Actions Setup Example
Use our GitHub Action by saving the following workflow as .github/workflows/threatcanvas.yml:

# GitHub Actions example for SecureFlag CI Runner
#
# Save this as .github/workflows/threatcanvas.yml
name: Generate Threat Model with ThreatCanvas

on:
  push:
    tags:
      - '*'

jobs:
  threat-model:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: secureflag/actions/.github/actions/repo_to_threat_model@main
        with:
          SECUREFLAG_API_KEY: ${{ secrets.SECUREFLAG_API_KEY }}
          SECUREFLAG_MODEL_UUID: ${{ vars.SECUREFLAG_MODEL_UUID }}
          # AI Provider - choose ONE of the following options:
          # Option 1 - Anthropic:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          # ANTHROPIC_MODEL: claude-sonnet-4-20250514 # optional
          # Option 2 - OpenAI:
          # OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          # OPENAI_MODEL: gpt-4o # optional
          # Option 3 - Azure OpenAI:
          # AZURE_OPENAI_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
          # AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
          # AZURE_OPENAI_DEPLOYMENT: ${{ secrets.AZURE_OPENAI_DEPLOYMENT }}
          # AZURE_OPENAI_API_VERSION: 2024-02-15-preview # optional
          # Option 4 - Gemini:
          # GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          # GEMINI_MODEL: gemini-2.5-flash # optional

Configure GitHub Secrets and Variables:
  1. Go to your repository → Settings → Secrets and variables → Actions
  2. Add a SECUREFLAG_API_KEY secret containing your SecureFlag API key
  3. Add a SECUREFLAG_MODEL_UUID variable containing your ThreatCanvas model UUID
  4. Add the API key for your chosen AI provider as a secret (e.g., ANTHROPIC_API_KEY)

After configuring this, create and push a new tag (for example, by publishing a release) to trigger the workflow.

In this example, the job runs automatically on any tag push (e.g., when you create a release).

You can read more about this GitHub Action here.

GitLab CI Setup Example
Add this to your repository's .gitlab-ci.yml:

# GitLab CI example for SecureFlag CI Runner
#
# Add this to your repository's .gitlab-ci.yml
#
# Required CI/CD variables (set SECUREFLAG_API_KEY as masked/protected):
# - SECUREFLAG_API_KEY: SecureFlag API authentication
# - SECUREFLAG_MODEL_UUID: SecureFlag model UUID
#
# AI Provider variables (choose one, set as masked/protected):
# Option 1 - Anthropic:
# - ANTHROPIC_API_KEY: Anthropic API key
# - ANTHROPIC_MODEL: (optional) Model name, default: claude-sonnet-4-20250514
# Option 2 - OpenAI:
# - OPENAI_API_KEY: OpenAI API key
# - OPENAI_MODEL: (optional) Model name, default: gpt-4o
# Option 3 - Azure OpenAI:
# - AZURE_OPENAI_KEY: Azure OpenAI API key
# - AZURE_OPENAI_ENDPOINT: Azure endpoint URL (e.g., https://your-resource.openai.azure.com/)
# - AZURE_OPENAI_DEPLOYMENT: Azure deployment name
# - AZURE_OPENAI_API_VERSION: (optional) API version, default: 2024-02-15-preview
# Option 4 - Gemini:
# - GEMINI_API_KEY: Google Gemini API key
# - GEMINI_MODEL: (optional) Model name, default: gemini-2.5-flash
#
# Optional CI/CD variables:
# - SECUREFLAG_REPO_PATH: Absolute path to restrict analysis to a directory (prepend with $CI_PROJECT_DIR)
# - SECUREFLAG_COMPONENT_LIMIT: (for 'model-repo') Hinted number of nodes in TC diagrams

stages:
  - tests

secureflag_ci:
  stage: tests
  image: registry.gitlab.com/secureflag-community/sf-runner:latest

  variables:
    SECUREFLAG_COMMANDS: model-repo

  script:
    - /app/entrypoint.sh

  rules:
    # Run on tags
    - if: $CI_COMMIT_TAG
    # Run on schedules
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

  allow_failure: true

Configure GitLab CI/CD Secrets and Variables:
  1. Go to your project → Settings → CI/CD → Variables
  2. Add a SECUREFLAG_API_KEY variable (masked/protected) containing your SecureFlag API key
  3. Add a SECUREFLAG_MODEL_UUID variable containing your ThreatCanvas model UUID
  4. Add the API key for your chosen AI provider as a masked/protected variable

After configuring this, push a new tag or wait for a scheduled pipeline to trigger the job.

In this example, the job runs automatically on:
  1. Any tag push (e.g., when you create a release)
  2. Scheduled pipelines (configure in CI/CD → Schedules)


Automatic Updates

After the setup is complete, each execution of the specific CI job will automatically update the ThreatCanvas model using AI-driven analysis.




Any changes made are shown in the Model Revisions section. You can access this by clicking the Revisions icon in the toolbar and then selecting View Changes next to the model.


