4. How to Use CodePeer in a Team

The most common use of CodePeer is as part of a regular quality control or quality assurance activity inside a team. In some cases, CodePeer is run every night on the current codebase; in other cases, it is run before every release on the release-candidate codebase. In all these cases, CodePeer results need to be shared between team members, either for viewing them or for rerunning CodePeer locally and comparing the new results to the shared ones. These processes are supported by specific ways to run CodePeer and share its results.

In all cases, the source code should not be shared directly (say, on a shared drive) between developers, as this is bound to cause problems with file access rights and concurrent accesses. Rather, the typical usage is for each user to check out the sources/environment and therefore use their own version/copy of the sources and project files, instead of physically sharing sources across all users.

The project file should also always specify a local, non-shared, user-writable directory as object directory (whether explicitly or implicitly: in the absence of an explicit object directory, the project file directory is used as the object directory).

4.1. Possible Workflows

Multiple workflows allow a team to use CodePeer:

  1. CodePeer is run on a server or locally, and textual or CSV results are shared in Configuration Management.
  2. CodePeer is run on a server, and textual results are sent to a third-party qualimetry tool (such as GNATdashboard, SonarQube, or SQUORE).
  3. CodePeer is run on a server or locally, and the CodePeer database is shared.

In the first two workflows, messages can be justified directly in the sources (see Through Pragma Annotate in Source Code), or indirectly with an ad-hoc mechanism independent of CodePeer. For example, in the first workflow, the expected output of CodePeer saved in Configuration Management can be compared to the output of subsequent runs, and in the second workflow, justifications can be entered in the qualimetry tool. The drawback of such ad-hoc mechanisms is that they may not adapt to source code changes that modify the line numbers in messages, whereas all three CodePeer-specific justification mechanisms (as documented in Reviewing Messages) can follow messages across line drifts.
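
As an illustration of the comparison used in the first workflow, here is a minimal shell sketch; the project file name proj.gpr and the text file names are placeholders:

# Regenerate the textual messages for the current sources.
codepeer -P proj.gpr -output-msg-only > codepeer_new.txt

# Compare with the expected output stored in Configuration Management;
# any difference flags new or vanished messages to investigate.
diff codepeer_expected.txt codepeer_new.txt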

The third workflow relies on a shared database to store all information on past runs of CodePeer. This allows performing comparisons between two runs of CodePeer (see Categorization of Messages), and justifying messages in three different ways (see Reviewing Messages).

The following sections describe in more detail how the third workflow can be adapted to a project's needs, for Viewing CodePeer Results, Reviewing Messages, Rerunning CodePeer Locally and Saving CodePeer Results in Configuration Management.

4.2. Viewing CodePeer Results

4.2.1. Accessing Results Remotely

CodePeer results can be accessed remotely by using a shared drive (like NFS or SMB). The typical usage is that the project file itself is not on the shared drive, but instead saved in a Configuration Management tool, such as subversion or git, and each developer does a check out of the sources/environment including the project file.

For everyone to access the same data on the server, the project file should point to this common area using the Database_Directory and Output_Directory project file attributes (see Project Attributes for more details). Only the Database_Directory really needs to be shared; the Output_Directory may be local to each developer's machine. For example:

project Proj is

   for Object_Dir use "obj";  --  local directory

   package CodePeer is
      for Output_Directory use "proj.output";  --  or "/shared/proj.output";
      for Database_Directory use "/shared/proj.db";
   end CodePeer;

end Proj;

To access CodePeer results remotely, each user (or a script) needs to do the following (a shell sketch combining these steps is given after the list):

  • create the directory <obj dir>/codepeer/<project>.output where <obj dir> is the Object_Dir value specified in the project file and <project> is the base name of the project file in lower case, as it appears in the CodePeer output directory (for example project when the output directory is project.output).
  • copy in this new directory the following two files from the CodePeer output directory on the server:
    • Inspection_Info.xml
    • Output_Info.xml
  • launch gps -P <project>.gpr and then use CodePeer ‣ Display Code Review to display the CodePeer output, and navigate through messages and review them.
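
Put together, these steps might look like the following sketch, assuming a project file proj.gpr with Object_Dir "obj" and a shared output directory /shared/proj.output (all paths are placeholders):

# Create the local output directory expected by GPS.
mkdir -p obj/codepeer/proj.output

# Copy the two index files from the shared output directory on the server.
cp /shared/proj.output/Inspection_Info.xml obj/codepeer/proj.output/
cp /shared/proj.output/Output_Info.xml obj/codepeer/proj.output/

# Display the results via CodePeer > Display Code Review.
gps -P proj.gpr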

4.2.2. Copying Results Locally

CodePeer results can also be retrieved on a desktop machine. In order to copy the files locally, each user (or a script) needs to do the following (a transfer sketch is given after the list):

  • create an XML file called inspection_request.xml under the directory <obj dir>/codepeer containing the following:

    <?xml version="1.0"?>
    <database output_directory="<path/to/obj dir>/codepeer/<project>.output/" >
      <inspection
         status_file="<path/to/obj dir>/codepeer/review_status_data.xml"
         output_file="<path/to/obj dir>/codepeer/inspection_data.xml"
      />
    </database>
    

    where <path/to/obj dir> is the absolute path to the Object_Dir value specified in the project file and <project> is the base name of the project file in lower case, as it appears in the CodePeer output directory (for example project when the output directory is project.output).

  • call gps_codepeer_bridge on the server machine as follows:

    $ gps_codepeer_bridge <obj dir>/codepeer/inspection_request.xml
    

    This generates two XML files needed to view CodePeer results locally: inspection_data.xml and review_status_data.xml.

  • transfer, preserving the timestamps, the following files and directories from the <obj dir>/codepeer directory on the server to the desktop machine:

    • <project>.output
    • <project>.db
    • inspection_data.xml
    • review_status_data.xml

    It is critical that the timestamps of the XML files above are strictly more recent than the timestamp of Sqlite.db contained in the <project>.db directory.

  • launch gps -P <project>.gpr and then use CodePeer ‣ Display Code Review to display the CodePeer output, and navigate through messages and review them.
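
The transfer step might look like the following sketch, assuming the results live under obj/codepeer on a machine called server and the project is named proj (all names are placeholders). rsync -a preserves timestamps, which keeps the ordering constraint above intact as long as it holds on the server:

# Copy the output and database directories and the two generated XML
# files, preserving timestamps (rsync -a implies -t).
rsync -a server:/path/to/obj/codepeer/proj.output obj/codepeer/
rsync -a server:/path/to/obj/codepeer/proj.db obj/codepeer/
rsync -a server:/path/to/obj/codepeer/inspection_data.xml obj/codepeer/
rsync -a server:/path/to/obj/codepeer/review_status_data.xml obj/codepeer/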

4.3. Reviewing Messages

User review of a message generated by CodePeer may determine either that the message is a false positive (i.e., the error situation CodePeer is warning about cannot actually occur) or that it is not an error (e.g., a numeric overflow might be intentionally raised in some situations). In either case, such a message does not indicate a potential error in the code that requires further investigation. It is often useful to preserve the results of such a review.

CodePeer provides two different mechanisms for capturing this information and associating it with the message in question. This can be accomplished either by interactively reviewing messages as described in Edit Message Window (Provide Message Review) for the HTML output and in Using the Locations View and Reviewing Messages for GPS, or by adding annotations in the form of pragmas to the Ada source code being analyzed as described in Through Pragma Annotate in Source Code. Each approach has its pros and cons.

Advantages of interactive review include:

  • No source code modifications are required; frozen source code can be reviewed.
  • Review does not perturb the line numbering of sources, which could in turn affect the text of other messages.
  • Review can be performed by people not familiar with modifying Ada source code.
  • Review status values other than False_Positive and Intentional are available (e.g., Pending).

Advantages of adding pragmas to the source include:

  • Review is integrated with the sources and easier to relate to the sources.
  • Review is less likely to be invalidated by other source changes; the mapping from the review to the message being reviewed is more straightforward.
  • Existing editing and version control tools can be used to create and manage reviews.

The two techniques can be mixed, even within a single Ada unit.

4.3.1. Through CodePeer Web Server and HTML Output

As described in Edit Message Window (Provide Message Review), users can justify CodePeer messages from the HTML output, when accessing it by Running the CodePeer Web Server. The web server ensures the integrity of the SQLite database when serving multiple users.

4.3.2. Through a Shared Database and GPS Annotations

If users access CodePeer results on a shared drive (e.g. NFS or SMB) by pointing their local project file to this common area using the Database_Directory and Output_Directory project file attributes (see Project Attributes for more details), they can justify CodePeer messages from GPS, as described in Using the Locations View and Reviewing Messages.

The SQLite database integrity is ensured by the fcntl() file locking mechanism on the shared drive, which must therefore work properly (note that this is not properly supported by all NFS implementations).

4.3.3. Through Pragma Annotate in Source Code

By adding in your Ada source code a pragma Annotate of the form:

pragma Annotate (CodePeer, False_Positive|Intentional, "<check name>", "<review message>");

CodePeer will no longer display the corresponding message by default, and will instead record a manual review in the database during the next run of CodePeer on the updated source code.

Note that this pragma is ignored by the compiler when generating code; it only affects CodePeer’s handling of generated messages.

When used in this way, an Annotate pragma takes exactly four arguments:

  1. The identifier CodePeer.
  2. One of two identifiers: False_Positive or Intentional.
    • False_Positive indicates a situation where the condition in question cannot occur but CodePeer was unable to deduce this.
    • Intentional indicates that the condition can occur but is not considered to be a bug.
  3. A string literal matching one of the message kinds listed in the first four tables presented in Description of Messages. If an unrecognized string literal is supplied, the resulting error message includes a list of the available options.
  4. A string literal which is used as the comment associated with the review of this message in the database.

The placement of the pragma in the source determines the messages (of the kind specified by the third argument) that it applies to. The pragma applies to messages associated with the preceding item in a statement or declaration list (ignoring other Annotate pragmas); if no such preceding item exists, then the pragma applies to messages associated with the immediately enclosing construct (excluding any portion of that construct which occurs after the pragma).

For a message saying that a subprogram always fails, the pragma can be placed either after the definition of the subprogram or at the start of the declaration part of the subprogram.

For the following example:

procedure Throw_Exception (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;

CodePeer generates the following message:

throw_exception.adb:1:1: high warning: subp always fails throw_exception always ends with an exception or a non-returning call

One way to handle this situation is to justify the message by adding an Annotate pragma as follows:

procedure Throw_Exception (Progr_Error : Boolean) is
   pragma Annotate (CodePeer, Intentional, "subp always fails", "reviewed by John Smith");
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;

A better solution to this problem would be to use pragma No_Return. Applying this pragma to the procedure Throw_Exception will prevent the display of the “subprogram always fails” message and, in addition, will provide a compile-time check that no control-flow path can reach the “end” of the procedure.
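
For example, assuming Throw_Exception is declared in a (hypothetical) package specification Error_Utils, the pragma is applied to the declaration and the body above stays unchanged:

package Error_Utils is

   procedure Throw_Exception (Progr_Error : Boolean);
   pragma No_Return (Throw_Exception);  --  checked by the compiler

end Error_Utils;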

The message saying that a subprogram always fails may be emitted because of one or more messages saying that some error always happens in the subprogram. Note that in this case, the “subprogram always fails” message must be explicitly justified by a dedicated pragma Annotate: as the following example shows, justifying only the message(s) about the error(s) inside the subprogram is not enough:

procedure Justified_Error_Inside (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   end if;
   pragma Assert (Progr_Error);  --  Justified error inside subprogram
   pragma Annotate (CodePeer, Intentional, "assertion", "reviewed by John Smith");
end Justified_Error_Inside;

CodePeer still generates the message indicating the subprogram always fails:

justified_error_inside.adb:1:1: high warning: subp always fails justified_error_inside fails for all possible inputs
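
To justify this remaining message, a dedicated pragma can be added at the start of the declarative part of the subprogram, following the placement rule given above:

procedure Justified_Error_Inside (Progr_Error : Boolean) is
   pragma Annotate (CodePeer, Intentional, "subp always fails", "reviewed by John Smith");
begin
   if Progr_Error then
      raise Program_Error;
   end if;
   pragma Assert (Progr_Error);  --  Justified error inside subprogram
   pragma Annotate (CodePeer, Intentional, "assertion", "reviewed by John Smith");
end Justified_Error_Inside;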

See also Add message review pragmas for more examples.

4.4. Rerunning CodePeer Locally

4.4.1. When Results Were Copied Locally

When Copying Results Locally, rerunning CodePeer on the desktop machine to compare with the results obtained on the server works the same way as any local run of CodePeer.
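
For example, assuming the files were copied under obj/codepeer as described above and a project file proj.gpr (placeholder name):

# Rerun the analysis locally; the copied database holds the past runs
# used as reference for comparison.
codepeer -P proj.gpr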

4.4.2. When the Database Is on a Shared Drive

When Accessing Results Remotely, users should have write access to the shared Database_Directory and the files under this directory. For rerunning CodePeer locally, the Output_Directory should not be on a shared drive. For example:

project Proj is

   for Object_Dir use "obj";  --  local directory

   package CodePeer is
      for Output_Directory use "proj.output";  --  but not "/shared/proj.output";
      for Database_Directory use "/shared/proj.db";
   end CodePeer;

end Proj;

4.4.3. When the Project File Is on a Shared Drive

It is also possible to put the project file itself on the shared drive, provided environment variables are used to specify a different object directory for each user, for example:

project Proj is

   type Environment is ("auto", "user");
   Env : Environment := External ("CODEPEER_ENV", "user");

   case Env is
      when "auto" =>
         for Object_Dir use "obj";
      when "user" =>
         for Object_Dir use External ("HOME") & "/obj";
   end case;

   package CodePeer is
      for Output_Directory use "proj.output";
      for Database_Directory use "/shared/proj.db";
   end CodePeer;

end Proj;

Then, automatic/scripted runs of CodePeer can use the auto version of the project file:

codepeer -P proj.gpr -XCODEPEER_ENV=auto

while users will by default use the user version of the project file, whether from the command line:

codepeer -P proj.gpr

or from GPS or GNATbench IDEs.

4.5. Saving CodePeer Results in Configuration Management

This section describes the various options available to store CodePeer results in a Configuration Management tool, such as subversion or git, so that this information can be retrieved at any point.

All the information concerning messages, annotations, and user comments is stored in the CodePeer database (file xxx.db/Sqlite.db, where xxx.db is the directory specified by the Database_Directory attribute in your project file, and is <project>.db by default when using GPS). In addition, CodePeer needs to access the following files in the xxx.output directory in order to regenerate reports:

  • Inspection_Info.xml
  • Output_Info.xml
  • *msgs.xml
  • race_conditions.xml

From these files, you can regenerate output in various formats (HTML, text listings, GPS and GNATbench output) using the -output-only codepeer switch from the command line, or using the menu Regenerate Report. The above files can also be used to display the CodePeer report using the menu Display Code Review (the *msgs.xml files are not needed for this operation).
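
For example, assuming a project file proj.gpr, a report can be regenerated from the saved database and XML files without reanalyzing the sources:

# Regenerate reports from the existing results, without a new analysis.
codepeer -P proj.gpr -output-only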

Note that this database file does NOT contain the source files, so when running codepeer, with or without -output-only, it also needs access to the corresponding version of the source files.

In addition, the database is a historical database: it contains all previous analyses performed, and allows for comparisons and for changing the reference (baseline) run.

So one option is to save all these files, together with the corresponding source version identifier in your CM system.
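
As a sketch of this option using git (the paths are placeholders, following the file list above):

# Save the historical database and the files needed to regenerate reports.
git add proj.db/Sqlite.db
git add proj.output/Inspection_Info.xml proj.output/Output_Info.xml
git add proj.output/*msgs.xml proj.output/race_conditions.xml
git commit -m "CodePeer results for source revision <revision>"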

Another option is to save in configuration management (instead, or in addition) the various outputs produced by CodePeer (e.g. the HTML output and/or the text output). See HTML Output and Text Output for more details.

A third option is to use the -output-msg-only or -output-msg CodePeer switches to generate and save the raw messages. For example, the following command:

codepeer -P<project> -output-msg-only

will list all messages generated by CodePeer in a compiler-like format. This output can be redirected to a file and put under configuration management.
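
For instance (a sketch; the file name is a placeholder):

# Capture the raw messages and store them in Configuration Management.
codepeer -P proj.gpr -output-msg-only > codepeer_messages.txt
git add codepeer_messages.txt
git commit -m "Baseline CodePeer messages"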

See also Text Output for more relevant switches when using codepeer_msg_reader, in particular the -show-annotations and -show-manual-reviews switches.

Using -output-msg-only, you can save in text format all the information you’d like to keep in configuration management.