3. How to View CodePeer Output

We recommend using GPS to view CodePeer messages: this IDE provides the best interactive experience and powerful source-navigation capabilities (not available in, for example, the HTML report) that make messages easier to review and understand.

3.1. Viewing CodePeer Output in GPS

3.1.1. CodePeer Report Window

When you open a CodePeer report (using one of the CodePeer menu items, see Running CodePeer from GPS), the locations view is filled with messages from the CodePeer run, and the following report window is displayed:

_images/codepeer-report.jpg

The top right of the report window displays the run id of the current (latest) run, and the top left displays either the run id of the baseline run for the current analysis level or the run id of the cutoff run if one was specified (respectively Current run #2 and Base run #1 in the screenshot; here the current run is the second run at this analysis level). Below the run ids you will also find the date and time of the CodePeer run. If you hover the mouse over this part, a tooltip displays the CodePeer switches that were used for the current run. This can be useful to remind you of any specific option used, such as the analysis level.

The report window has two tabs: Messages and Race conditions. The first tab provides a summary of all messages and filtering capabilities, and the second provides a summary of potential race conditions found.

On the left side of the messages window, there is a main area listing all files for which messages have been generated, with information organized hierarchically by project, file and subprogram. You can click on any file to display the first message in this file in the locations view. See Using the Locations View and Reviewing Messages for more details on the use of the locations view.

Similarly, you can double-click on any file or subprogram to jump to the corresponding source editor, which will be displayed with its CodePeer annotations.

For each of these entities, three columns display the number of high, medium and low messages corresponding to the current filter selection (see Using Filters).

In addition, for projects and source files, two more columns are available, displaying the Passed checks (the number and percentage of checks that CodePeer has analyzed automatically without finding any potential run-time error) and the Total checks (the total number of checks verified by CodePeer).

Different kinds of filters are available on the right side of the window. Clicking on each of these filters will show/hide the corresponding messages in the locations view. This way, you can easily concentrate on e.g. high ranking messages only, or on a specific category (e.g. validity check) during your review of the messages.

The filters are divided into four kinds:

  • Message categories

    This section (the first column on the right of the file summary) lists all the message categories found in the current analysis in two groups: warnings and checks. In addition, it lists all CWEs found in the current analysis when display of CWEs is enabled. By default, all categories found are enabled; you can select/unselect all categories in each group at once by clicking on the check box to the left of the Warning categories (or Check categories, or CWE categories) label, or individually by checking each item. See Description of Messages for more details on the meaning of each category.

    The CWE categories filter applies to all kinds of messages. When a CWE is selected in the filter, all messages with this CWE are displayed (even when they are not selected in the Warning categories and Check categories filters). Precondition messages are displayed when their related checks have a selected CWE.

  • Message history

    By default, added and unchanged messages are displayed (with respect to the baseline). You can also select old removed messages that are no longer present in the last run, or concentrate on added messages only and ignore unchanged messages. This is particularly useful when using CodePeer on legacy code: you can skip previously found messages and concentrate on the messages found after new changes, in order to analyze the impact of these changes.

  • Message ranking

    By default, GPS only displays the most interesting messages (ranked medium and high). Once you have reviewed these messages, reviewing the low-ranked messages can be useful. Displaying the low and/or informational messages can also help you understand other messages, since they provide additional information.

  • Message review status

    By default, GPS displays messages with the uncategorized, pending and bug review statuses. You can change this default setting by checking or unchecking the corresponding boxes (in particular to view not a bug, false positive and intentional review statuses).

On the top of the race conditions window, there is a list of shared objects. Clicking on a shared object opens a list of entry points at the bottom of the race conditions window and automatically scrolls the Locations view to simplify access to the locations where the shared object is used.

At the bottom of the race conditions window, a list of entry points and kinds of access is displayed for the currently selected shared object. Clicking on a row of this view opens a source editor and scrolls it to the point where the shared object is accessed.

3.1.2. Using the Locations View and Reviewing Messages

When you open a CodePeer report (see Running CodePeer from GPS), the locations view at the bottom part of GPS is filled with messages from the CodePeer run. For example:

_images/codepeer-locations.jpg

You can click on any of these messages, and GPS will open a source editor containing the relevant source file, at the listed source location. You can also use the Filter panel available from the contextual menu in the locations view in order to display only messages containing a specific text pattern.

When a precondition-related message is displayed, extra information may be provided by CodePeer as secondary locations associated with the message, giving source locations that are related to the precondition check, such as calls forming a call stack, as well as actual checks (e.g. range check) related to the precondition message. Each of these secondary locations can be clicked to see the corresponding source code, which makes these messages easier to understand. In other words, these secondary locations are places where CodePeer learned about the range of values of the variables or expressions mentioned in the message, and in particular, they provide the call stack that would lead to the selected potential precondition failure.

This capability replaces the Backtraces view which was provided in previous versions.

For more details on how to use the locations view, see the GPS documentation directly, which explains how this view is managed.

In addition, an Edit icon is displayed in front of each CodePeer message. Clicking on this icon allows you to post a manual review of the message. It is possible to review single or multiple messages at once. See Reviewing Messages for more information about the use of the message review dialogs.

3.1.3. Reviewing Messages

There are two forms of CodePeer message review dialog: one to review single messages and another to review multiple messages. Which dialog will be used depends on how many messages are selected in the Locations view before clicking on the edit icon.

_images/codepeer-single_review.jpg _images/codepeer-multiple_review.jpg

The New Status drop-down box allows selecting the review status of the message. The review status is initially set to Uncategorized and can be set to Pending, Not A Bug, Intentional, False Positive, or Bug.

The Approved By text box allows the name of the reviewer to be recorded.

The Comment text box allows the reviewer to enter an explanation/justification for the message.

The single-message review dialog displays a history of the changes related to that message, while the multiple-message review dialog displays a list of messages to be reviewed along with their ranking and review status.

3.1.4. Reviewing CodePeer Annotations

CodePeer generates as-built documentation for each subprogram that it analyzes. This documentation is presented in the form of virtual comments at the beginning of each subprogram in a source-editor window. This as-built documentation includes annotations in the form of Preconditions, Presumptions, Postconditions, etc. which characterize the functioning of the subprogram, as determined by a global static analysis of the subprogram and the routines that it calls, directly or indirectly. For more details on the form of these annotations, see Description of Annotations. For more details on using these annotations as part of code review, see Use Annotations for Code Reviews.

The virtual comments are not actually added to your source code; they are only visible when viewing the source in a GPS source-editor window. You may hide the annotations using the CodePeer ‣ Hide annotations item in the source editor’s contextual menu. You may display them again using the contextual menu item CodePeer ‣ Show annotations.

3.1.5. Using the source editor’s tooltip to display values

Tooltips in the GPS source editor display the set of possible values for objects under the mouse cursor, if this information is available.

This can be disabled by the CodePeer ‣ Display values item in the source editor’s contextual menu. When disabled, GPS displays the default contents of the tooltip.

3.1.6. Source Navigation

Using the GPS source navigation can be very useful for understanding CodePeer messages, verifying that messages are indeed relevant, and modifying the source code.

In particular, using the Find all references and Find all local references contextual menus, as well as Goto body, will help significantly with reviewing messages.

In some cases, navigation information might not be available; these options are then indicated as “best guess” and clicking on them might not do anything. How the data needed for source navigation is generated is controlled by the GPS “Build mode” scenario variable. You can check the value of this variable by selecting the menu item Tools/Views/Scenario and then looking in the Scenario window (typically on the left side of the screen) for the “Build mode” variable. It should be set to “codepeer” if you want to use source navigation without building with GNAT.

Overall there are three possibilities:

  1. You have only CodePeer installed (and not GNAT).
  2. You have both CodePeer and GNAT installed, but you want to use CodePeer to generate your cross-reference information.
  3. You have both CodePeer and GNAT installed, but you want to build using GNAT to generate your cross-reference information.

In cases (1) and (2), the Build mode should be “codepeer.” In case (3), the Build mode should be “default” or anything but “codepeer.”

For more information on GPS source-navigation capabilities, see the GPS User’s Guide.

3.1.7. Accessing Results Remotely (IDE Server)

CodePeer results can be accessed remotely by using the CodePeer IDE Server, which is implemented as a standalone web server.

The CodePeer analysis is performed on a server, and the CodePeer IDE Server is run on it after analysis:

$ codepeer -P<project> --ide-server --port=8080

The default port is 8080, so you can omit this optional value.

To stop the server you need to press Ctrl-C, which will clean up the lock file and cleanly shut down the server.

Note that the CodePeer database is locked when the server is running. You need to stop it before doing a new analysis.

Each developer checks out the sources of the project locally and uses GPS to display analysis information and to review messages.

The project file should provide the URL used to access the CodePeer server via the Server_URL attribute:

package CodePeer is
   for Server_URL use "http://server:8080";
end CodePeer;

where server should be replaced by the name of the host running the CodePeer server.

Note that when specifying a Server_URL, GPS switches to a review mode for CodePeer results, and will disable menus to perform a new CodePeer analysis since performing a CodePeer analysis in remote mode is not supported (the run needs to be performed directly on the server).

3.1.8. Accessing Results Remotely (shared database)

CodePeer results can be accessed remotely by using a shared drive (like NFS or SMB). The typical usage is that the project file itself is not on the shared drive, but instead saved in a Configuration Management tool, such as subversion or git, and each developer does a check out of the sources/environment including the project file.

For everyone to access the same data on the server, the project file should point to this common area using the Database_Directory and Output_Directory project file attributes (see Project Attributes for more details). Only the Database_Directory really needs to be shared; the Output_Directory may be local to each developer machine. For example:

project Proj is

   for Object_Dir use "obj";  --  local directory

   package CodePeer is
      for Output_Directory use "proj.output";  --  or "/shared/proj.output";
      for Database_Directory use "/shared/proj.db";
   end CodePeer;

end Proj;

To access CodePeer results remotely, each user (or a script) needs to perform the following steps (an example follows the list):

  • create the directory <obj dir>/codepeer/<project>.output where <obj dir> is the Object_Dir value specified in the project file and <project> is the base name of the project file in lower case, as it appears in the CodePeer output directory (for example project when the output directory is project.output).
  • copy in this new directory the following two files from the CodePeer output directory on the server:
    • Inspection_Info.xml
    • Output_Info.xml
  • launch gps -P <project>.gpr and then use CodePeer ‣ Display Code Review to display the CodePeer output, and navigate through messages and review them.
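
For example, with the Proj project file shown above (local object directory obj, shared output directory /shared/proj.output), these steps might look as follows; the exact paths depend on your setup:

$ mkdir -p obj/codepeer/proj.output
$ cp /shared/proj.output/Inspection_Info.xml obj/codepeer/proj.output/
$ cp /shared/proj.output/Output_Info.xml obj/codepeer/proj.output/
$ gps -P proj.gpr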

3.1.9. Copying Remote Results Locally

CodePeer results can also be retrieved on a desktop machine. In order to copy the files locally, each user (or a script) needs to perform the following steps (a complete example follows the list):

  • call codepeer_bridge on the server machine as follows:

    $ codepeer_bridge --output-dir=<path/to/obj dir>/codepeer/<project>.output
    

    where <path/to/obj dir> is the absolute path to the Object_Dir value specified in the project file and <project> is the base name of the project file in lower case, as it appears in the CodePeer output directory (for example project when the output directory is project.output).

    This will generate two XML files needed to view CodePeer results locally: inspection_data.xml and review_status_data.xml, under <obj dir>/codepeer.

  • transfer, preserving the timestamps, the following files and directories from the <obj dir>/codepeer directory on the server to the desktop machine:

    • <project>.output
    • <project>.db
    • inspection_data.xml
    • review_status_data.xml

    It is critical that the timestamps of the XML files above are strictly more recent than the timestamp of Sqlite.db contained in the <project>.db directory.

  • launch gps -P <project>.gpr and then use CodePeer ‣ Display Code Review to display the CodePeer output, and navigate through messages and review them.
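
As an illustration, assuming the object directory on the server is /work/obj and the project is named proj, the whole sequence might look like this (the host name and paths are illustrative, and scp -p is just one way of preserving timestamps):

On the server:

$ codepeer_bridge --output-dir=/work/obj/codepeer/proj.output

On the desktop machine:

$ scp -rp server:/work/obj/codepeer/proj.output obj/codepeer/
$ scp -rp server:/work/obj/codepeer/proj.db obj/codepeer/
$ scp -p server:/work/obj/codepeer/inspection_data.xml obj/codepeer/
$ scp -p server:/work/obj/codepeer/review_status_data.xml obj/codepeer/
$ gps -P proj.gpr

Preserving the timestamps keeps the two XML files more recent than Sqlite.db, as required above.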

3.2. HTML Output

3.2.1. HTML generation

In order to generate HTML output, you need to specify either -html or -html-only.

For example, in order to both perform a codepeer analysis and generate HTML:

$ codepeer -Pprj -html

If you have already performed a codepeer analysis, then use the following command to generate HTML only without performing a new analysis:

$ codepeer -Pprj -html-only

Note that all the switches relevant to -output-msg can also be used for HTML generation. See Text Output for more details. For example you can use:

$ codepeer -Pprj -html -show-all -cutoff 3

3.2.2. Browser Requirements

The output of CodePeer may be browsed with recent versions of any browser, and requires that JavaScript be enabled in your browser.

3.2.3. Location and Format of Output

The HTML output is generated by a tool called gnathub, part of your codepeer installation, in a directory called gnathub/html-report, located under the object directory of your project. In order to view the HTML output, you first need to launch the CodePeer web server.

3.2.4. Running the CodePeer Web Server

CodePeer comes with a web server, which must be run after an analysis to serve the HTML Output and to allow users to manually review messages. You can run the CodePeer web server as follows:

$ codepeer -Pprj --web-server --port=8080

The default port is 8080, so you can omit this optional value.

You can then open a browser and specify a URL of the form http://localhost:8080/. localhost will work if you are using the same machine for the web server and for the web browser. Otherwise you will need to replace localhost with the name of the machine where the web server is launched.

To stop the web server you need to press Ctrl-C, which will cleanly shut down the server.

3.2.5. Message Review

In order to review messages manually and provide audit information, you can select messages in the File messages or All messages area at the bottom of the source view. Clicking on one message will select it. You can then further extend the selection by pressing the Ctrl key and clicking.

Once you’ve selected all the messages that you want to review at the same time, click the Add review button.

A Message Review window will appear. The top area displays all the messages selected, with their current status.

The New status drop-down box allows selecting the new review status of selected messages. The review status is initially set to Uncategorized and can be set to Pending, Not A Bug, False Positive, Intentional or Bug.

The Approved by text area allows the name of the reviewer to be recorded.

The Comment text area allows the reviewer to enter an explanation/justification for a message (e.g., explain why the construct flagged by the message is not actually a bug).

The Save button will save this review information in the database via the web server.

3.3. Viewing CodePeer Output in GNATbench

3.3.1. CodePeer Problem Details View

When you run CodePeer (using one of the CodePeer menu items, see Running CodePeer from GNATbench), the CodePeer Problem Details view is filled with messages from the CodePeer run, and the following view is displayed:

_images/codepeer_problem_details.jpg

The messages generated by CodePeer are shown in this view. The messages are associated with source files. For each file, each message shows the line and column number, the message ranking (high/medium/low) and a brief description of the problem. If you click on a message in the CodePeer Problem Details view, the corresponding message will be highlighted in the editor window.

Different kinds of filters are available on the top right side of the bottom window. Click on the View menu icon (inverted triangle icon) and then select Filter By submenu. You will see the following list of filters:

  • High ranking messages
  • Medium ranking messages
  • Low ranking messages
  • Warnings
  • Show All

Clicking on each of these filters will show the messages of that ranking in the CodePeer Problem Details view. At present the filters support filtering the messages by message ranking (high/medium/low) and showing messages corresponding only to warnings.

In the editor window the problems are identified by special markers (red flag for high ranking messages, orange flag for medium ranking messages and yellow flag for low ranking messages). If you hover over these markers, you can see the details of the message.

3.3.2. Reviewing CodePeer Messages in GNATbench

To review CodePeer messages, select the message in the CodePeer Problem Details view by double-clicking on it. A dialog window titled “CodePeer Message Review” will pop up. You can change the status of the message from the “New Status” pull-down menu. You can also add the name of the person reviewing the message in the “Approved By” text field and add comments in the “Comment” text box. You can modify any of the fields. To save the changes, press OK. Below is a screenshot of the “CodePeer Message Review” dialog.

_images/codepeer_messagereview.jpg

You can see a history of the changes related to that message in the “Audit trail” text area.

3.3.3. CodePeer Annotation Details View

When CodePeer finishes its analysis, it also creates annotation markers for the sources. The editor ruler shows the annotation markers on the first line of every method. Clicking on an annotation marker brings up the CodePeer Annotation Details view, which looks as shown below.

_images/codepeer_annotations.jpg

This view displays the preconditions and postconditions that characterize a method. It also shows any unanalyzed calls for the method, and any presumptions made about their effects.

3.3.4. Race Condition Details View

When the user right-clicks on a project and selects the Codepeer ‣ View Race Condition menu, the Race Condition Details view opens in the bottom pane.

There is a table of potentially shared objects on the bottom left side of the view. Clicking on a shared object will update the “Access Summary” table and the “Access Details” tree view for the selected object. By default, both the “Access Summary” table and the “Access Details” tree show the entries for the first object in the “Potentially Shared Objects” table.

The “Access Summary” table lists the entry points and kind of accesses for the currently selected object.

The “Access Details” tree view lists all the references to the object organized by access type (read/update). It also lists the filename and line number where the object is accessed. Clicking on an entry in the “Access Details” view will highlight the corresponding line of the file in the editor window. The Race Condition Details view looks as shown below:

_images/codepeer_racereport.jpg

All the tables in this view can be resized.

3.4. Reviewing Messages

User review of a message generated by CodePeer may determine either that the message is a false positive (i.e., the error situation CodePeer is warning about cannot actually occur) or that it is not an error (e.g., numeric overflow might be intentionally raised in some situations). In either case, such a message does not indicate a potential error in the code that requires further investigation. It is often useful to preserve the results of such a review.

CodePeer provides two main mechanisms for capturing this information and associating it with the message in question.

The first mechanism allows interactively reviewing messages, either through the CodePeer web server and HTML output (see Through CodePeer Web Server and HTML Output) or through GPS or GNATbench using a shared database (see Through a Shared Database).

The second mechanism consists of adding annotations in the form of pragmas to the Ada source code being analyzed as described in Through Pragma Annotate in Source Code.

Each approach has its pros and cons.

Advantages of interactive review include:

  • No source code modifications are required; frozen source code can be reviewed.
  • Review does not perturb the line numbering of sources (changes in line numbering can in turn affect the text of other messages).
  • Review can be performed by people not familiar with modifying Ada source code.
  • Review status values other than False_Positive and Intentional are available (e.g., Pending).

Advantages of adding pragmas to the source include:

  • Review is integrated with the sources and easier to relate to the sources.
  • Review is less likely to be invalidated by other source changes; the mapping from the review to the message being reviewed is more straightforward.
  • Existing editing and version control tools can be used to create and manage reviews.

The different techniques can be mixed, even within a single Ada unit.

In addition, it is also possible and sometimes desirable to improve the code specification to clarify the intent of the code and at the same time eliminate false alarms, as explained in Improve Your Code Specification.

3.4.1. Through CodePeer Web Server and HTML Output

As described in Message Review, users can justify CodePeer messages from the HTML output, when accessing it by Running the CodePeer Web Server. The web server ensures the integrity of the SQLite database when serving multiple users.

3.4.2. Through Pragma Annotate in Source Code

By adding in your Ada source code a pragma Annotate of the form:

pragma Annotate (CodePeer, False_Positive|Intentional, "<check name>", "<review message>");

CodePeer will no longer display the corresponding message by default and will instead record a manual review in the database during the following run of CodePeer on the updated source code.

Note that this pragma is ignored by the compiler when generating code; it only affects CodePeer’s handling of generated messages, so you can safely add it to your source code without impacting compilation or run-time behavior. If for some reason pragma Annotate is recognized but not handled the same way by your Ada compiler, then you can safely replace pragma Annotate with pragma Gnat_Annotate.
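
For example, the general form shown above then becomes:

pragma Gnat_Annotate (CodePeer, False_Positive|Intentional, "<check name>", "<review message>");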

When used in this way, an Annotate pragma takes exactly four arguments:

  1. The identifier CodePeer.
  2. One of two identifiers: False_Positive or Intentional.
    • False_Positive indicates a situation where the condition in question cannot occur but CodePeer was unable to deduce this.
    • Intentional indicates that the condition can occur but is not considered to be a bug.
  3. A string literal matching one of the message kinds listed in the first four tables presented in Description of Messages. If an unrecognized string literal is supplied, the resulting error message includes a list of the available options.
  4. A string literal which is used as the comment associated with the review of this message in the database.

The placement of the pragma in the source determines the messages (of the kind specified by the third argument) that it applies to. The pragma applies to messages associated with the preceding item in a statement or declaration list (ignoring other Annotate pragmas); if no such preceding item exists, then the pragma applies to messages associated with the immediately enclosing construct (excluding any portion of that construct which occurs after the pragma).
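
For instance, in the following hypothetical fragment (Total and Count are assumed to be declared in an enclosing scope, and the check names and comments are purely illustrative), the first pragma has no preceding item in the declarative part and therefore applies to the enclosing block, while the second applies to the division statement that precedes it:

declare
   pragma Annotate (CodePeer, Intentional, "test always true",
                    "review applies to the enclosing block");
   Result : Integer;
begin
   Result := Total / Count;
   pragma Annotate (CodePeer, False_Positive, "divide by zero",
                    "review applies to the preceding statement");
end;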

For a message saying that a subprogram always fails, the pragma can be placed either after the definition of the subprogram or at the start of the declaration part of the subprogram.

For the following example:

procedure Throw_Exception (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;

CodePeer generates the following message:

throw_exception.adb:1:1: medium warning: subp always fails throw_exception always ends with an exception or a non-returning call

One way to handle this situation is to justify the message by adding an Annotate pragma as follows:

procedure Throw_Exception (Progr_Error : Boolean) is
   pragma Annotate (CodePeer, Intentional, "subp always fails", "reviewed by John Smith");
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;

A better solution to this problem would be to use the pragma No_Return. Applying this pragma to the procedure Throw_Exception will prevent the display of the “subprogram always fails” message and, in addition, it will provide a compiler check that there is no control-flow path that can reach the “end” of the procedure.
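
For example, the declaration of Throw_Exception could be written as follows (a sketch):

procedure Throw_Exception (Progr_Error : Boolean);
pragma No_Return (Throw_Exception);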

The message saying that a subprogram always fails may be emitted because of one or more messages saying that some errors always happen in the subprogram. Note that in this case, the “subprogram always fails” message must be explicitly justified by a dedicated pragma Annotate. The following example shows that to justify the message that a subprogram always fails, it is not enough to just justify the message about error(s) in the subprogram:

procedure Justified_Error_Inside (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   end if;
   pragma Assert (Progr_Error);  --  Justified error inside subprogram
   pragma Annotate (CodePeer, Intentional, "assertion", "reviewed by John Smith");
end Justified_Error_Inside;

CodePeer still generates the message indicating the subprogram always fails:

justified_error_inside.adb:1:1: medium warning: subp always fails justified_error_inside fails for all possible inputs

Similarly, for the following example:

function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
end Func;

CodePeer generates the following message:

func.adb:8:24: medium: divide by zero might fail: requires X - Y /= 0

As it happens, this message is a false positive; the function will always safely return -4, but CodePeer is unable to deduce this fact.

One way to handle this situation is to justify the message by adding an Annotate pragma.

Consider adding a pragma as follows:

function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
   pragma Annotate (CodePeer, False_Positive,
                    "Divide By Zero", "reviewed by John Smith");
end Func;

With this modification, CodePeer displays no message for this example.

In addition, if manual reviews are displayed (for example, if codepeer is invoked with the -output-msg-only -show-reviews-only switches), the following is displayed:

func.adb:8:24: suppressed: divide by zero might fail: requires X - Y /= 0
   Review #1: false_positive: Approved by Annotate pragma at 9:4: reviewed by John Smith

3.4.3. Through a Shared Database

If users access CodePeer results on a shared drive (e.g. NFS or SMB) by pointing their local project file to this common area using the Database_Directory and Output_Directory project file attributes (see Project Attributes for more details), they can justify CodePeer messages from GPS, as described in Using the Locations View and Reviewing Messages or from GNATbench, as described in Reviewing CodePeer Messages in GNATbench.

The SQLite database integrity is ensured by the fcntl() file locking mechanism on the shared drive, which must therefore be working properly (note that this is not properly supported by all NFS implementations).

3.4.4. Through a CSV File

Another possibility for reviewing messages is to export them via the CSV Output and then edit the CSV file (e.g. in a spreadsheet) to add manual reviews.

In order to store a review in a CSV file, you will need to fill the following columns (fields) for a given row (message):

  • Has_Review: must be set to TRUE.
  • Classification: must be set to one of the following values: pending, false_positive, not_a_bug, bug.
  • Timestamp: optional field giving the date of the review; if left empty, it is set automatically to the date of the codepeer_bridge run, see below.
  • Approved By: optional field giving the name of the review approver.
  • Comment: optional field giving the review comment.

The other fields must be kept unchanged.
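
For example, taking the sample row shown in CSV Output, a filled-in review might look like this (the classification, reviewer name and comment are illustrative, and the Timestamp field is left empty so that it is filled in automatically):

f.adb,71,7,dead code,added,TRUE,low,warning,"dead code because F > 0",not_a_bug,"561","","",func,,John Smith,"defensive code kept on purpose",6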

Once the CSV file has been modified and saved (in the same CSV format), you can use codepeer_bridge on the server machine where codepeer is running and where the CodePeer database is located, e.g.:

codepeer_bridge -v --output-dir=<obj dir>/codepeer/<project>.output --import-reviews=<file.csv>

See Exporting/Importing User Reviews for more details on the --import-reviews switch.

3.4.5. Improve Your Code Specification

In some cases, it may be better to annotate the Ada code directly with additional information, to help the analysis go further and also clarify the intent of the code.

Ada provides various ways to improve the specification of the code and thus allow static-analysis tools to reason with additional information. For example:

  • explicit ranges given on types, subtypes and variables
  • explicit assertions written in the code via pragma Assert or pragma Assume
  • pre- and postconditions
  • predicates and invariants

3.4.5.1. Using pragma Assert and Assume

For both pragmas Assert and Assume (as with any run-time check), analysis of subsequent code assumes the success of the preceding run-time check. pragma Assume can, therefore, be used to prevent generation of unwanted messages about subsequent code. No message will be generated about a possible division by zero in the following example (provided that CodePeer is unable to deduce that failure of the assumption is a certainty):

declare
   Num : Natural := Some_Function;
   Den : Natural := Some_Other_Function;
   pragma Assume (Den /= 0);
   Quo : Natural := Num / Den;
begin
   ...
end;

In this respect, pragma Assume can be viewed as an alternative to pragma Annotate (CodePeer, False_Positive, ...);. It is important to be aware of two differences between these approaches to avoiding unwanted messages. First, an Assume pragma can affect the run-time behavior of a program, whereas an Annotate pragma has no effect on run-time behavior. Second, an incorrect Assume pragma can invalidate CodePeer’s analysis, possibly to an extent that is not obvious, while an Annotate pragma has no effect on CodePeer’s analysis of subsequent constructs. The user-visible effects (with respect to both run-time behavior and CodePeer’s analysis) of an Assume pragma are more like those of an Assert pragma followed by an Annotate pragma that suppresses any message about the possibility that the assertion might fail.
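
As a rough illustration of that last point, the Assume pragma in the example above behaves, for CodePeer’s purposes, much like the following combination (a sketch; the review comment is illustrative):

   Den : Natural := Some_Other_Function;
   pragma Assert (Den /= 0);
   pragma Annotate (CodePeer, Intentional, "assertion",
                    "Den is assumed to be nonzero here");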

3.4.5.2. Using Pre and Post conditions

Taking the GPS demo located under <GPS install>/share/examples/gps/demo as an example, and running it under codepeer via:

codepeer -P demo -level 1 -output-msg

this will generate in particular a potential array index check failure. Instead of hiding the error, we’re going to extend the contract of the subprogram Get_Char so that we can make sure the callers are correct.

First, let’s look more closely at the subprogram itself:

function Get_Char return Character is
   C : Character;
begin
   --  First check if the line is empty or has been all read.

   if End_Line then
      Read_New_Line;
   end if;

   C := Line (First_Char);
   First_Char := First_Char + 1;

   return C;
end Get_Char;

The potential error occurs on the access to the element of Line, where First_Char might not be smaller than or equal to 1024. So instead of removing this error, we’re going to provide, as a precondition of Get_Char, the fact that the value has to be within the expected range:

function Get_Char return Character;
pragma Precondition (First_Char <= 1_024);

Note that the above uses a pragma so that this syntax can be used with any Ada compiler (the pragma will basically be ignored by non-GNAT compilers).

If you are using an Ada 2012 compiler then the following syntax is preferred:

function Get_Char return Character
  with Pre => First_Char <= 1_024;

Re-running CodePeer will remove the error. Now First_Char is assumed to be at most 1024 and thus there is no potential error. Note that this pragma doesn’t have any effect on the executable code by default; it’s just here for documentation purposes and for use by additional tools. However, when using the GNAT compiler, a corresponding dynamic check can be added, e.g. by compiling with the -gnata switch.
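
For instance, assuming a gprbuild-based build of the demo project, these dynamic checks might be enabled as follows (an illustration, not part of the demo’s default build):

$ gprbuild -P demo -cargs -gnata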

It would be interesting to understand the effects of this precondition on its callers. At level 1, the analysis does not take the call graph into account, so no additional problems are spotted. However, when increasing the level of analysis, specifically to level 2, an additional potential run-time error will be spotted on this piece of code:

procedure Skip_Spaces is
   Current_Char : Character;
begin
   loop
      Current_Char := Input.Get_Char;
      exit when Current_Char in Printable_Character;
   end loop;
   --  We must unread the non blank character just read.
   Input.Unread_Char;
end Skip_Spaces;

There’s a real problem here, as the loop may indeed go beyond the expected limit. Manually promoting potential problems to preconditions allowed us to move them up a level, where they can be fixed at the proper location, for example here by adding an additional exit condition:

loop
   exit when First_Char > 1_024;
   Current_Char := Input.Get_Char;
   exit when Current_Char in Printable_Character;
end loop;

3.4.6. Exporting/Importing User Reviews

When multiple users are reviewing messages using a local copy of the database, they can export the manual reviews (marking messages as e.g. false positives) performed locally, in order to import them later in a shared database.

In order to export all manual reviews stored in a database for a given project, you can do (on the same machine where reviews have been performed):

codepeer_bridge --output-dir=<obj dir>/codepeer/<project>.output --export-reviews=reviews.xml

This will save all manual reviews in a file called reviews.xml.

Then later on the “server” machine containing the reference database, you can copy this file and import it:

codepeer_bridge --output-dir=<obj dir>/codepeer/<project>.output --import-reviews=reviews.xml

You can also add the -v or --verbose switch to get detailed information about the actions performed, in particular when some manual reviews are already found in the reference database. The codepeer_bridge tool will make the following choices when dealing with potential conflicts between the input file and the existing database:

  • If a message id is not found in the database, emit a warning and skip the manual review.
  • If the message id has no manual review, store the review history from the input file in the database.
  • If the message id already has manual reviews then:
    • Ignore all audit trails that have the same date, status and user.
    • Add all other audit trails to the database.

See CodePeer_Bridge Tool for more details on the use of codepeer_bridge.

3.5. Text Output

You can get a compiler-like listing of messages generated by CodePeer by running codepeer with the -output-msg-only switch:

$ codepeer -Pmy_project -output-msg-only
alias.adb:10:11: high: validity check: Int is uninitialized here

The above command will run the codepeer_msg_reader utility, with the output directory as an argument. In other words, the above command is equivalent to:

$ codepeer_msg_reader codepeer/my_project.output
alias.adb:10:11: high: validity check: Int is uninitialized here

Note that you can also ask codepeer to perform its analysis and then run codepeer_msg_reader automatically via the -output-msg switch, e.g.:

$ codepeer -Pmy_project -output-msg
no partitioning needed.
starting analysis
analyzed p__body.scil in 0.00 seconds
analyzed p.scil in 0.00 seconds
analysis complete.
2 .scil files processed.
updating DB and generating reports
finish updating DB at ...
alias.adb:10:11: high: validity check: Int is uninitialized here

You can also optionally specify the following codepeer_msg_reader switches:

  • -current <run_id>

    Override the current run id for generating and comparing messages. By default, the latest run is used as current. See also -cutoff.

  • -cutoff <run_id>

    Override the baseline run used to compare messages. By default, the baseline associated with the current run is used as a cutoff. See also -current.

  • -csv

    Display messages in a CSV format, suitable for use by other tools such as spreadsheets, and including more verbose information. See CSV Output for more details.

  • -hide-low

    Hide low-ranking messages.

  • -out <file>

    Specify a file where messages are output, instead of standard output.

  • -show-added

    This switch is relevant only for the default text output, where any added message (compared to the base run) will be marked with the designator [added] immediately preceding the message’s ranking. In the CSV output, the “History” column will indicate added for any added message, even when this switch is not used.

  • -show-all

    Show all message kinds. This switch is a shortcut for -show-added -show-annotations -show-info -show-removed.

  • -show-annotations

    List annotations (e.g. pre- and postconditions) computed by CodePeer, in addition to regular messages.

  • -show-backtraces

    Show backtraces associated with precondition messages, using the following syntax:

    • ; call at file1:line1:col1

      Means that the given precondition is associated with a call at file1 at line line1 and column col1.

    • ; <check kind> at file2:line2:col2

      Means that a given check associated with the precondition comes from the given source location. The list of possible values for <check kind> can be found at Run-Time Checks, User Checks and Uninitialized and Invalid Variables.

    For example, the following message:

    arr.adb:20:4: high: precondition (range check) failure on call to arr.proc1: requires Max <= -1; call at arr.adb:16:7; range check at arr.adb:8:26; range check at arr.adb:10:7
    

    read backwards, means that the range checks performed in arr.adb on lines 8 and 10 are propagated as a precondition on a call at arr.adb line 16, and finally propagated as a precondition on a call at arr.adb line 20, where codepeer has determined that this precondition will fail.

  • -show-header

    Show a header with extra information about the run including:

    • The date and time of CodePeer run
    • The CodePeer version that has generated this report
    • The host for which CodePeer is configured (e.g. Windows 32 bits)
    • The full command-line of the CodePeer invocation, including project file
    • The CodePeer switches specified in the project file
    • The current and base CodePeer run numbers (CodePeer run numbers start at 1 and increase sequentially on each execution of CodePeer)
    • The files excluded from analysis, if specified in the project file via the Excluded_Source_Files or Excluded_Source_Dirs attributes.

    Note that strictly speaking, when combined with -csv, this switch will generate a malformed CSV file. Since using -csv and -show-header provides useful information, this combination is allowed, and it is the user’s responsibility to postprocess (e.g. split) the output if needed.

  • -show-info

    Show informational messages such as which units were analyzed, mention of any limitation encountered during analysis, or any subprograms that could not be analyzed.

  • -show-removed

    Show messages that have been removed from the current run relative to the baseline (see CodePeer Baselines), interspersed with the normally displayed messages. When generating text messages, any removed messages will be marked with the designator [removed] immediately preceding the message’s ranking. In CSV message output, the “History” column will indicate removed for any removed messages.

  • -show-reviews

    List also messages reviewed manually, with additional review information about current review status, date of review, name of reviewer and associated comment.

  • -show-reviews-only

    List only messages reviewed manually, with a history of each review containing: review status, date of review, name of reviewer and associated comment.

For example, you can retrieve information about the added and removed messages of the current run in a CSV file via:

codepeer -Pprj -output-msg-only -show-removed -csv -out messages.csv

3.6. Report File

If you want CodePeer to generate a report file in text format, you can combine some or all of the following switches, either on the command line or via CodePeer’s switches in the project file:

  • -output-msg
  • -out <report file>
  • -show-header
  • -show-info
  • -show-added
  • -show-removed
  • -show-reviews

See Text Output for details on each of these switches.

For instance, if you specify in your project file:

package CodePeer is
   for Switches use
     ("-level", "max", "-output-msg", "-out", "report_file.out", "-show-header", "-show-info");
end CodePeer;

The -out switch allows you to specify an output file when generating messages with -output-msg, instead of standard output which is the default.

Here is an example of a report file produced with the above switches:

date              : YYYY-MM-DD HH:MM:SS
codepeer version  : 18.2 (yyyymmdd)
host              : Windows 64 bits
command line      : codepeer -P my_project.gpr
codepeer switches : -level max -output-msg -out report_file.out -show-header -show-info
current run number: 4
base run number   : 1
excluded file     : /path/to/unit3.adb

unit1.ads:1:1: info: module analyzed: unit1
unit1.adb:3:1: info: module analyzed: unit1__body
unit2.adb:12:25: medium: divide by zero might fail: requires X /= 0
[...]

3.7. CSV Output

You can use the -csv switch via e.g. codepeer -output-msg[-only] -csv to generate messages in a CSV format, suitable for use by other tools such as spreadsheets, for example:

codepeer -Pprj -output-msg-only -csv -out messages.csv

This will generate a file messages.csv with the following contents: first a heading listing all columns:

File,Line,Column,Category,History,Has_Review,Ranking,Kind,Message,Classification,CWE,Checks,Primary_Checks,Subp,Timestamp,Approved By,Comment,Message_Id

then one line for each message, with each field separated by a comma, e.g:

f.adb,71,7,dead code,added,FALSE,low,warning,"dead code because F > 0",uncategorized,"561","","",func,,,6

The columns available are:

  • File: full pathname of the file containing the message
  • Line: line number of the message
  • Column: column number of the message
  • Category: category of the message, as listed in Description of Messages and Description of Annotations
  • History: added if the message is new relative to the baseline, removed if the message has been removed relative to the baseline, and unchanged otherwise; see also CodePeer Baselines
  • Has_Review: TRUE if the message has been reviewed manually, FALSE otherwise
  • Ranking: ranking of the message (annotation, info, low, medium, high, suppressed)
  • Kind: message kind (check, warning, info, annotation)
  • Message: text of the message, surrounded by double quotes
  • Classification: review classification (uncategorized, pending, false_positive, not_a_bug, bug)
  • CWE: list of relevant Common Weakness Enumeration ids for the given message; see Description of Messages for more information about CWE
  • Checks: only relevant for Precondition_Checks; this is a list of checks that contributed to the associated precondition
  • Primary_Checks: same as above, with the implied checks omitted from the list
  • Subp: fully qualified name of the enclosing subprogram
  • Timestamp: timestamp of the last review, if relevant
  • Approved By: name of the review approver, if relevant
  • Comment: last review comment, if relevant
  • Message_Id: unique identifier of the message

3.8. Security Report

You can ask CodePeer to perform a security-oriented analysis and generate a report in HTML format via the -security-report switch, which can be combined with other switches, in particular -level. For example:

codepeer -Pprj -level 3 -security-report

This switch will perform a codepeer analysis focused on the following potential vulnerabilities (see Description of Messages for more details):

  • unprotected access
  • unprotected shared access
  • mismatched protected access
  • validity check
  • dead code
  • test always false
  • test always true
  • unused assignment
  • unused out parameter
  • useless self assignment
  • loop does not complete normally

and will generate in the codepeer directory a static HTML file called codepeer-security-report.html, along with two CSS files called basic.css and sphinxdoc.css.

This HTML file contains the following sections:

  1. Potential Vulnerabilities Detected by CodePeer
  2. Security Vulnerabilities Not Present
  3. List of Ada Source Files Analyzed by CodePeer

3.9. Saving CodePeer Results in Configuration Management

This section describes the various options available to store CodePeer results in a Configuration Management tool, such as subversion or git, so that this information can be retrieved at any point.

All the information concerning messages, annotations, and user comments is stored in the CodePeer database (file xxx.db/Sqlite.db, where xxx.db is the directory specified by the Database_Directory attribute in your project file, and is <project>.db by default when using GPS). So if you want to be able to perform new runs on top of previous ones, this is the directory that you need to save.

In addition, in order to display run results for an existing database without re-running CodePeer, the following files in the xxx.output directory are needed:

  • Inspection_Info.xml
  • Output_Info.xml
  • race_conditions.xml

From these files, you can regenerate output in various formats (HTML, text listings, GPS and GNATbench output) using the -output-msg-only codepeer switch from the command line. The above files can also be used to display the CodePeer report using the menu Display Code Review.

There are also some files under the codepeer directory that are used as a cache by GPS for speeding up Display Code Review:

  • inspection_data.xml
  • review_status_data.xml

As well as a related subdirectory under the xxx.output directory, where information about CodePeer annotations is stored for GPS use in displaying annotations in the source editor:

  • annotations

The above two GPS-related XML files can be saved to ensure faster code review later, after the files have been restored, when using the option suggested below of saving files under configuration management. To be effective, it is important to restore the timestamps of these files, or at least to ensure that the files’ timestamps are more recent than that of the database (see Copying Remote Results Locally). The annotations subdirectory can be saved as well, but need not be if display of annotations isn’t needed. (Be aware that copying this subdirectory can slow down the execution of codepeer_bridge when the subdirectory contents are large.)

Note that the database file does NOT contain source files, so when running codepeer to e.g. regenerate the HTML output or display sources in GPS, you also need to provide access to the corresponding version of the source files.

In addition, the database is a historical database, which means it contains all previous analyses performed and allows for comparisons and changing the baseline (see CodePeer Baselines).

So one option is to save all these files as well as the corresponding source version number of your CM system.

Another option is to save in configuration management (instead or in addition) the various output produced by CodePeer (e.g. the HTML output, and/or the text output). See HTML Output and Text Output for more details.

A third option is to use the -output-msg-only or -output-msg CodePeer switches to generate and save the raw messages. For example the following command:

codepeer -P<project> -output-msg-only

will list all messages generated by CodePeer in a compiler-like format. This output can be redirected to a file and put under configuration management, as illustrated below.
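
For instance, the listing could be captured and committed as follows (git is just one example of a configuration management tool, and the file name is illustrative):

$ codepeer -P<project> -output-msg-only > codepeer_messages.txt
$ git add codepeer_messages.txt
$ git commit -m "CodePeer results for current run"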

See also Text Output for more relevant switches when using codepeer_msg_reader, in particular the -show-annotations and -show-reviews switches.

Using -output-msg-only you can save in text format all the information you’d like to keep in configuration management.