5. CodePeer Workflows

There are many different ways to take advantage of CodePeer capabilities, depending on the stage of the project’s lifecycle. During development phases, CodePeer can be run every day, or as often as there is a build of some part of the system. In this mode, using CodePeer to find defects before a project enters runtime testing minimizes the number of crashes encountered during testing, allowing functional tests to run to completion and thereby providing more usable output for test engineers. On existing code being maintained, it can be used to find potential problems that are not revealed by testing and which can be extremely difficult to track down after deployment. It can also be used to perform a change impact analysis. In high-integrity and certification-related environments, it can be used to demonstrate the absence of certain kinds of errors or to improve the quality of code review activities.

In all cases, the source code should not be shared directly (say, on a shared drive) between developers, as this is bound to cause problems with file access rights and concurrent accesses. Rather, the typical usage is for each user to check out the sources/environment, and therefore use their own version/copy of sources and project files, instead of physically sharing sources across all users.

This section describes in detail various ways to put CodePeer in production and place it in the hands of all team members, or only a few selected ones.

Note that these workflows assume familiarity with Getting the Right CodePeer Settings, which explains how to choose the proper settings for your specific needs.

5.1. Analyzing code locally prior to commit

In this workflow, a fast analysis of the code changes is done at each developer’s desk:

Developers run CodePeer on their desktop in compiler mode using the menu CodePeer ‣ Analyze File or CodePeer ‣ Analyze File by File in GPS, after compilation and before testing. This menu performs an incremental analysis of only the files that are directly impacted by the change, similar to an incremental Ada build where only files whose dependencies have changed are recompiled. If developers keep their local setup and perform such a run after each change and after each configuration management update (e.g. svn update or git pull), then each analysis will be incremental and fast.

Alternatively, developers can perform a run of CodePeer at a suitable level for a fast analysis, which will typically be a level 1 or level 2 run, depending on the size and complexity of the code. Each developer maintains a local database which is used for comparison purposes, and each run is performed with the -baseline switch. Developers can then perform a run after their change, and concentrate on new messages marked Added from either the GPS or HTML interface by selecting the corresponding filter (check Added, uncheck Removed and Unchanged).
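
For example, assuming level 1 is appropriate for the codebase (the level and project name below are illustrative):

# run after each change; -baseline makes this run the reference for the next comparison:
codepeer -Pprj -level 1 -baseline

# list only the messages added since the previous run:
codepeer -Pprj -output-msg-only -show-added | grep "\[added\]"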

Then developers look at the results (produced in a matter of minutes at most), check each issue reported by CodePeer, and either:

  • fix the code when the message points at a real problem; or

  • justify the message Through Pragma Annotate in Source Code, so that the justification is kept with the sources under configuration management.

Note that here, the CodePeer runs are kept local, so the database only serves as a way to compute differences between two runs; it should not be used to e.g. store manual reviews, which could not be shared with other team members.

5.2. Nightly runs on a server

In this workflow, CodePeer is run nightly on a dedicated server with ample resources (e.g. 24 cores, as per System Requirements), at a suitable level (see Getting the Right CodePeer Settings) that allows all users to manually justify relevant messages via the CodePeer web server also running on this machine. Note that messages already justified by developers through pragma Annotate in the code do not need to be justified again through the CodePeer web server.

These runs are typically launched nightly so as to take into account all commits of the day and provide results to users the next morning.
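
A nightly invocation might look like the following sketch, where the level and project name are examples to be adapted, and the -j switch (assuming your version supports it) controls the number of parallel analysis processes:

codepeer -Pprj -level 3 -j16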

The next day, developers can analyze the results either via the web interface or from GPS by accessing the database remotely, as per Accessing Results Remotely (IDE Server).

Developers then fix the code, or justify the relevant messages using either pragma Annotate or via the database (see Reviewing Messages).

Optionally: for each release, the results (database + necessary additional files) are committed under configuration management for traceability purposes as described in Saving CodePeer Results in Configuration Management.
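
For example, with git, this could amount to something like the following sketch (the output directory is a placeholder; the exact set of files to save is listed in Saving CodePeer Results in Configuration Management):

git add obj/codepeer
git commit -m "CodePeer results for release N"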

5.3. Continuous runs on a server after each change

In this workflow, CodePeer is run on a dedicated server with ample resources (e.g. 24 cores, as per System Requirements), at a level that keeps each run fast enough to provide quick results to users after each commit. The idea of these runs is not to be exhaustive, but to focus on the differences from the previous run.

These continuous runs are triggered on each new repository change, typically through integration with a continuous integration framework such as Jenkins (see Running CodePeer from Jenkins).

At the end of a run, a summary is sent to developers via email or a web interface. This summary can be generated via e.g.:

codepeer -Pprj -output-msg-only -show-added | grep "\[added\]"

Developers then fix the code, justify the relevant messages Through Pragma Annotate in Source Code, or wait for the next nightly run to post a manual review Through CodePeer Web Server and HTML Output.

5.4. Combined desktop/nightly run

In this common workflow, a fast analysis of the code changes is done at each developer’s desk, and in addition a longer and more complete analysis is performed nightly on a powerful server.

This workflow is a combination of Analyzing code locally prior to commit and Nightly runs on a server.

5.5. Combined continuous/nightly run

In this other common workflow, a fast analysis of the code changes is done after each commit on a server in a continuous way, and in addition a longer and more complete analysis is performed nightly on a powerful server.

Alternatively, the nightly run is used as a baseline (via the -baseline switch) for the continuous runs.
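
For example (levels and project name are illustrative):

# nightly run, marked as the new baseline:
codepeer -Pprj -level 3 -baseline

# continuous runs then report their differences against that baseline:
codepeer -Pprj -level 1
codepeer -Pprj -output-msg-only -show-added | grep "\[added\]"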

This workflow is a combination of Continuous runs on a server after each change and Nightly runs on a server.

5.6. Combined desktop/continuous/nightly run

In this workflow, a fast analysis of the code changes is done at each developer’s desk; in addition, an analysis (fast, but potentially longer than the one performed by developers) is done after each commit on a server, complemented by a more exhaustive analysis performed nightly on a powerful server.

This workflow is a combination of Analyzing code locally prior to commit, Continuous runs on a server after each change, and Nightly runs on a server.

5.7. Software customization per project

In this workflow, you have a core version of your software that gets branched out or instantiated and modified on a per-project/mission basis. This customization typically involves modifying and adding source files to fit specific requirements.

Continuous solution: Manually review messages Through Pragma Annotate in Source Code so that they are shared among all software variants. The main advantage of this approach is that merging of branches and analyses is performed entirely at the source level, using conventional configuration management tools. You can then perform separate CodePeer analyses on all active branches, and have separate teams analyze the results and put manual reviews, via pragma Annotate, on the relevant branch. The CodePeer database then serves, on each branch, to compare successive runs, but is not used to store or share analyses (locally or between branches).

One shot solution: When branching, copy the analysis (database) from the core configuration; this imports all existing manual reviews, which are then maintained separately from there (in effect creating a fork that cannot be merged back). You can then perform separate CodePeer analyses on all active branches, as in the Continuous solution above.

5.8. Compare local changes with master

In this workflow, you have a CodePeer analysis running on a shared server, synchronized with the latest version of your sources. This CodePeer database (the gold database) typically gets updated when the sources are updated, creating a new baseline run via the -baseline switch. You also have local users who are making changes to their code and would like to pre-validate them with CodePeer prior to committing, in a separate sandbox and using the same analysis settings, unlike Analyzing code locally prior to commit.

This is best integrated via continuous integration: the local user creates a separate branch for his development and commits his changes on this branch. A continuous builder (e.g. Jenkins) monitors user branches and triggers an analysis that will:

  • Copy the CodePeer database from the reference (nightly) run into a separate sandbox.

  • Perform a CodePeer run with the same settings as the reference run.

  • Send the results to the user, either via the CodePeer web server and HTML interface, or by generating a textual report via codepeer -Pproject -output-msg. This can be combined with the -show-added switch so that the user can concentrate on the new messages found. For example:

    codepeer -Pprj -output-msg -show-added | grep "\[added\]"
    
  • Throw out this separate sandbox. (A consolidated sketch of these steps is shown below.)
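
Put together, the job run by the continuous builder could look like the following sketch, in which all paths, the project name, the level and the mail command are placeholders to be adapted to your environment (in particular, the database must be copied to the location expected by the project’s settings):

#!/bin/sh
# 1. copy the reference (nightly) CodePeer database into a fresh sandbox
cp -r /server/nightly/codepeer-db /tmp/sandbox/codepeer-db
# 2. rerun CodePeer with the same settings as the reference run
codepeer -Pprj -level 2
# 3. extract the messages added by the user's change and send them out
codepeer -Pprj -output-msg -show-added | grep "\[added\]" > report.txt
mail -s "CodePeer results" user@example.com < report.txt
# 4. throw out the separate sandbox
rm -rf /tmp/sandbox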

Once the user receives the report, he can address the findings by modifying the code, by adding a review Through Pragma Annotate in Source Code, or by posting an analysis on the gold database once his change is merged on the reference/master branch and a new baseline run is available for review.

Another, more manual, alternative involves copying the gold database to the user’s workspace, running CodePeer there, looking at the differences, and then throwing out this local environment.

5.9. Multiple teams analyzing multiple subsystems

If you have a large software system composed of multiple subsystems, each maintained by a different team, then you will want to map the CodePeer runs to these teams as follows:

Perform a separate CodePeer analysis for each subsystem, using a separate workspace and database. You can typically create one project file (.gpr) for each of these subsystems and run codepeer on each specific subsystem. To resolve dependencies between subsystems, use a limited with clause for each dependency in the project file, e.g.:

limited with "subsystem1";
limited with "subsystem2";

project subsystem3 is
   [...]
end subsystem3;

The codepeer run will typically look like:

codepeer -Psubsystem1 --no-subprojects

The --no-subprojects switch tells codepeer to only analyze code local to the given subsystem and not code from other subsystems.
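
Assuming one project file per subsystem as above, the runs can then be scripted, for instance (subsystem names are placeholders):

for prj in subsystem1 subsystem2 subsystem3; do
    codepeer -P$prj --no-subprojects
done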

5.10. Use CodePeer to generate a security report

You can use CodePeer to perform a security-oriented analysis and generate a separate report, taking advantage in particular of its Support for CWE.

This can be achieved via the --security-report switch, as explained in Security Report.
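
For example (project name is illustrative):

codepeer -Pprj --security-report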

You can then use the generated HTML file codepeer-security-report.html either as is, convert it to e.g. PDF, or include it in a larger report as part of your security assessment.
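
For instance, assuming a third-party HTML-to-PDF converter such as wkhtmltopdf is available (any equivalent tool will do):

wkhtmltopdf codepeer-security-report.html codepeer-security-report.pdf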