3. How to View CodePeer Output¶
We recommend using GNAT Studio to view CodePeer messages: this IDE provides the best interactive experience, with powerful source-navigation capabilities (not available in, for example, the HTML report) that make messages easier to review and understand.
3.1. Viewing CodePeer Output in GNAT Studio¶
If your compiler is not GNAT, make sure you have properly set up your project file, as explained in Configuring IDEs to Drive Tools.
3.1.2. CodePeer Report Window¶
When you open a CodePeer report (using one of the CodePeer menu items, see Running CodePeer from GNAT Studio), the locations view is filled with messages from the CodePeer run, and the following report window is displayed:
On the top right of the report window, the run id of the current (latest) run is displayed; on the top left, either the run id of the baseline run of the current analysis level, or the run id of the cutoff run if one was specified (respectively Current run #2 and Base run #1 in the screenshot; here the current run is the second run at this analysis level). Below the run ids you will also see the date and time when CodePeer performed the review. If you hover the mouse over this area, a tooltip displays the CodePeer switches that were used for the current run. This can be useful to remind you of any specific option used, such as the analysis level.
The report window has two tabs: Messages and Race conditions. The first tab provides a summary of all messages and filtering capabilities, and the second provides a summary of potential race conditions found.
On the left side of the messages window, there is a main area listing all files for which messages have been generated, with information organized hierarchically by project, file and subprograms. You can click on any file to display the first message on this file in the locations view. See Using the Locations View for more details on the use of the locations view.
Similarly, you can double click on any file or subprogram to jump to the corresponding source editor, which will be displayed with its corresponding CodePeer annotations.
For each of these entities, three columns display the number of high, medium and low messages corresponding to the current filter selection (see Using Filters).
In addition, for projects and source files, two additional columns are available, displaying the Passed checks (the number and percentage of checks that CodePeer analyzed automatically without finding any potential run-time error) and the Total checks (the total number of checks verified by CodePeer). Since the number of checks does not count warnings, for clarity the Passed checks and Total checks columns are hidden at level 0.
Different kinds of filters are available on the right side of the window. Clicking on each of these filters will show/hide the corresponding messages in the locations view. This way, you can easily concentrate on e.g. high ranking messages only, or on a specific category (e.g. validity check) during your review of the messages.
The filters are divided into four kinds:
This section (the first column on the right of the file summary) lists all the message categories found in the current analysis, in two groups: warnings and checks. In addition, it lists all CWEs found in the current analysis when the display of CWEs is enabled. By default, all categories found are enabled; you can select/unselect all categories in each group at once by clicking on the check box to the left of the Warning categories (or Check categories, or CWE categories) label, or individually by checking each item. See Description of Messages for more details on the meaning of each category.
The CWE categories filter applies to all kinds of messages. When a CWE is selected in the filter, all messages with this CWE are displayed (even when their category is not selected in the Warning categories and Check categories filters). Precondition messages are displayed when a related check has a selected CWE.
By default, added and unchanged messages (with respect to the baseline) are displayed. You can also display removed messages that are no longer present in the last run, or concentrate on added messages only and ignore unchanged ones. This is particularly useful when using CodePeer on legacy code: you can skip previously found messages and concentrate on messages introduced by new changes, to analyze the impact of those changes.
By default, GNAT Studio only displays the most interesting messages (ranked medium and high). Once you have reviewed these, reviewing the low-ranked messages can be useful. Displaying the low and/or informational messages can also help you understand other messages, since they can provide additional context.
Message review status
By default, GNAT Studio displays all the messages except those whose review has a status of category Not_A_Bug (i.e. a review status of not a bug, false positive, intentional, or any Custom Review Status of this category). You can change this default setting by checking or unchecking the corresponding boxes.
On the top of the race conditions window, there is a list of shared objects. Clicking on a shared object opens a list of entry points at the bottom of the race conditions window and automatically scrolls the Locations view to simplify access to the locations where the shared object is used.
On the bottom side of the race conditions window, the list of entry points and kinds of access is displayed for the currently selected shared object. Clicking on a row of this view opens a source editor and scrolls it to the point of access to the shared object.
3.1.3. Using the Locations View¶
When you open a CodePeer report (see Running CodePeer from GNAT Studio), the locations view at the bottom part of GNAT Studio is filled with messages from the CodePeer run. For example:
You can click on any of these messages, and GNAT Studio will open a source editor containing the relevant source file, at the listed source location. You can also use the Filter panel available from the contextual menu in the locations view in order to display only messages containing a specific text pattern.
When a precondition related message is displayed, extra information may be provided by CodePeer as secondary locations associated with the message, giving source locations that are related to the precondition check, such as calls forming a call stack, as well as actual checks (e.g. range check) related to the precondition message. Each of these secondary locations can be clicked to see the corresponding source code, which makes these messages easier to understand. In other words, these secondary locations are places where CodePeer learned about the range of values of the variables or expressions mentioned in the message; in particular, they provide the call stack that would lead to the selected potential precondition failure.
This capability replaces the Backtraces view which was provided in previous versions.
For more details on how to use the locations view, see the GNAT Studio documentation directly, which explains how this view is managed.
In addition, an Edit icon is displayed in front of each CodePeer message. Clicking on this icon allows posting a manual review of the message. It is possible to review a single message or multiple messages at once. See Reviewing Messages for more information about using the message review dialogs.
3.1.4. Reviewing Messages¶
There are two forms of CodePeer message review dialog: one to review single messages and another to review multiple messages. Which dialog will be used depends on how many messages are selected in the Locations view before clicking on the edit icon.
The New Status drop-down box allows selecting the review status of the message. The review status is initially set to Uncategorized and can be set to Pending, Not A Bug, Intentional, False Positive, Bug or any Custom Review Status.
The Approved By text box allows the name of the reviewer to be recorded.
The Comment text box allows the reviewer to enter an explanation/justification for the message.
The single-message review dialog displays a history of the changes related to that message, while the multiple-message review dialog displays a list of messages to be reviewed along with their ranking and review status.
3.1.5. Reviewing CodePeer Annotations¶
CodePeer generates as-built documentation for each subprogram that it analyzes. This documentation is presented in the form of virtual comments at the beginning of each subprogram in a source-editor window. This as-built documentation includes annotations in the form of Preconditions, Presumptions, Postconditions, etc. which characterize the functioning of the subprogram, as determined by a global static analysis of the subprogram and the routines that it calls, directly or indirectly. For more details on the form of these annotations, see Description of Annotations. For more details on using these annotations as part of code review, see Use Annotations for Code Reviews.
The virtual comments are not actually added to your source code; they are only visible when viewing the source in a GNAT Studio source-editor window. You may hide the annotations using the corresponding item in the source editor’s contextual menu, and display them again using the same contextual menu item.
3.1.6. Using the source editor’s tooltip to display values¶
Tooltips in the GNAT Studio source editor display the set of possible values for objects under the mouse cursor, if this information is available.
This can be disabled via the corresponding item in the source editor’s contextual menu. When disabled, GNAT Studio displays the default contents of the tooltip.
3.1.8. Accessing Results Remotely (IDE Server)¶
CodePeer results can be accessed remotely by using the CodePeer IDE Server, which is implemented as a standalone web server underneath.
Note that the same project file should be used to run the analysis, to run the IDE server, and in GNAT Studio to review the results.
The CodePeer analysis is performed on a server, and the CodePeer IDE Server is run on it after analysis:
$ codepeer -P<project> --ide-server --port=8080
The default port is 8080, so you can omit this optional value.
To stop the server you need to press Ctrl-C, which will clean up the lock file and cleanly shut down the server.
Note that the CodePeer database is locked when the server is running. You need to stop it before doing a new analysis.
Each developer checks out the sources of the project locally and uses GNAT Studio to display analysis information and review messages.
The project file should provide the URL to access the CodePeer server, using:

package CodePeer is
   for Server_URL use "http://server:8080";
end CodePeer;

where server should be replaced by the host name of the machine running the CodePeer IDE server.

Note that when specifying a Server_URL, GNAT Studio switches to a review mode for CodePeer results, and will disable the menus that perform a new CodePeer analysis, since performing a CodePeer analysis in remote mode is not supported (the run needs to be performed directly on the server).
3.1.10. Copying Remote Results Locally¶
CodePeer results can also be retrieved on a desktop machine. In order to copy the files locally, each user (or a script) needs to:
run codepeer_bridge on the server machine as follows:
$ codepeer_bridge --output-dir=<path/to/obj dir>/codepeer/<project>.output
where <path/to/obj dir> is the absolute path to the Object_Dir value specified in the project file and <project> is the base name of the project file in lower case, as it appears in the CodePeer output directory (for example project when the output directory is project.output).
This will generate two XML files needed to view CodePeer results locally:
transfer, preserving the timestamps, the following files and directories from the <obj dir>/codepeer directory on the server to the desktop machine:
It is critical that the timestamps of the XML files above are strictly more recent than the timestamp of Sqlite.db contained in the <project>.output directory.
launch gnatstudio -P <project>.gpr and then use the corresponding CodePeer menu item to display the CodePeer output, then navigate through messages and review them.
3.2. HTML Output¶
3.2.1. HTML generation¶
In order to generate HTML output, you need to specify either the -html or the -html-only switch.
For example, in order to perform both a CodePeer analysis and generate HTML:
$ codepeer -Pprj -html
If you have already performed a CodePeer analysis, then use the following command to generate HTML only without performing a new analysis:
$ codepeer -Pprj -html-only
Note that all the switches relevant to -output-msg can also be used for HTML generation. See Text Output for more details. For example you can run:
$ codepeer -Pprj -html -show-all -cutoff 3
In order to view the HTML output, you first need to launch the CodePeer web server.
3.2.2. Viewing HTML output¶
CodePeer comes with a web server, which must be run after an analysis and HTML generation to serve the HTML Output and to allow users to manually review messages. You can run the CodePeer web server as follows:
$ codepeer -Pprj --web-server --port=8080
The default port is 8080, so you can omit this optional value.
You can then open a browser and specify a URL of the form http://localhost:8080/. Note that using localhost will only work if you are using the same machine for the web server and for the web browser. Otherwise you will need to replace localhost with the name of the machine where the web server is launched.
To stop the web server you need to press Ctrl-C, which will cleanly shut down the server.
For more details on the HTML interface, see the documentation of GNATdashboard web interface.
3.2.3. Location and Format of Output¶
The HTML output is generated by a tool called gnathub, part of your CodePeer installation, in a directory called gnathub/html-report located under the object directory of your project. Note that this HTML output cannot be viewed directly and must be served by the CodePeer web server as described in Viewing HTML output.
3.2.4. Message Review¶
In order to review messages manually and provide audit information, you can select messages in the File messages or All messages area at the bottom of the source view. Clicking on one message selects it. You can then further extend the selection by pressing the Ctrl key and clicking. Once you have selected all the messages that you want to review at the same time, click the Add review button.
A Message Review window will appear. The top area displays all the messages selected, with their current status.
The New status drop-down box allows selecting the new review status of the selected messages. The review status is initially set to Uncategorized and can be set to Pending, Not A Bug, Intentional, False Positive, Bug or any Custom Review Status.
The Approved by text area allows the name of the reviewer to be recorded.
The Comment text area allows the reviewer to enter an explanation/justification for a message (e.g., explain why the construct flagged by the message is not actually a bug).
The Save button will save this review information in the database via the web server.
3.3. Viewing HTML Output in Jenkins¶
To view the HTML Output in Jenkins, you need to install the HTML Publisher plugin; see that plugin’s documentation for details.
From the project configuration view, set up your Analyse with CodePeer build; see Running CodePeer from Jenkins for more details. To be able to see the HTML report, you need to check the Generate HTML report box.
Then, add a “post build action” and select the HTML Publisher action for publishing HTML reports.
In the HTML directory to archive field, add the path to gnathub/html-report, which is located inside the object directory defined in your project file.
After saving the configuration and running the build once, the HTML report is available from the Jenkins dashboard.
Note that you access the report in offline mode, meaning that you have a read-only view of the report with no manual review capability.
Because of the Content-Security-Policy (CSP), to be able to see the report, you will have to run the Jenkins server with the following option:
-Dhudson.model.DirectoryBrowserSupport.CSP="sandbox allow-same-origin allow-scripts; default-src 'self'; script-src * 'unsafe-eval'; img-src *; style-src * 'unsafe-inline'; font-src * data:"
For more details, you can take a look at Jenkins Security Policy.
3.4. Viewing CodePeer Output in GNATbench¶
3.4.1. CodePeer Problem Details View¶
When you run CodePeer (using one of the CodePeer menu items, see Running CodePeer from GNATbench), the CodePeer Problem Details view is filled with messages from the CodePeer run, and the following view is displayed:
The messages generated by CodePeer are shown in this view. The messages are associated with source files. For each filename, the message shows the line and column number, message ranking (high/medium/low) and a brief description of the problem. If you click on the message in the CodePeer Problem Details view, the corresponding message will be highlighted in the editor window.
Different kinds of filters are available on the top right side of the bottom window. Click on the View menu icon (inverted triangle icon) and then select Filter By submenu. You will see the following list of filters:
High ranking messages
Medium ranking messages
Low ranking messages
Clicking on each of these filters will show the messages of that ranking in the CodePeer Problem Details view. At present the filters support filtering the messages by message ranking (high/medium/low) and showing messages corresponding only to warnings.
In the editor window the problems are identified by special markers (red flag for high ranking messages, orange flag for medium ranking messages and yellow flag for low ranking messages). If you hover over these markers, you can see the details of the message.
3.4.2. Reviewing CodePeer Messages in GNATbench¶
To review CodePeer messages, select a message in the CodePeer Problem Details view by double clicking on it. A dialog window titled “CodePeer Message Review” will pop up. You can change the status of the message from the “New Status” pull-down menu. You can also add the name of the person reviewing the message in the “Approved By” text field and add comments in the “Comment” text box. You can modify any of the fields. To save the changes, press OK. Below is a screenshot of the “CodePeer Message Review” screen.
You can see a history of the changes related to that message in the “Audit trail” text area.
3.4.3. CodePeer Annotation Details View¶
When CodePeer finishes its analysis, it also creates annotation markers for the sources. The editor ruler shows the annotation markers on the first line of every method. Clicking on an annotation marker brings up the CodePeer Annotation Details view, which looks as shown below.
This view displays the preconditions and postconditions that characterize a method. It also shows any unanalyzed calls for the method, and any presumptions made about their effects.
3.4.4. Race Condition Details View¶
When the user right clicks on a project and selects the corresponding menu item, the Race Condition Details view opens in the bottom pane.
There is a table of potentially shared objects on the bottom left side of the view. Clicking on a shared object will update the “Access Summary” table and “Access Details” tree view for the selected object. By default both “Access Summary” table and “Access Details” tree show the entries for the first object in the “Potentially Shared Objects” table.
The “Access Summary” table lists the entry points and kind of accesses for the currently selected object.
The “Access Details” tree view lists all the references to the object organized by access type (read/update). It also lists the filename and line number where the object is accessed. Clicking on an entry in the “Access Details” view will highlight the particular line in the file in the editor window. The Race Condition Details view looks as shown below:
All the tables in this view can be resized.
3.5. Reviewing Messages¶
User review of a message generated by CodePeer may determine that the message is either a false positive (i.e., the error situation CodePeer is warning about cannot actually occur) or not an error (e.g., a numeric overflow might be intentionally raised in some situations). In either case, such a message does not indicate a potential error in the code that requires further investigation. It is often useful to preserve the results of such a review.
CodePeer provides two different mechanisms for capturing this information and associating it with the message in question.
The first mechanism allows interactively reviewing messages via either:
The HTML output as described in Message Review
The GNAT Studio IDE (see Reviewing Messages)
The GNATbench IDE (see Reviewing CodePeer Messages in GNATbench)
The CSV output (see Through a CSV File)
The second mechanism consists of adding annotations in the form of pragmas to the Ada source code being analyzed as described in Through Pragma Annotate / Justification in Source Code.
Each approach has its pros and cons.
Advantages of interactive review include:
No source code modifications are required; frozen source code can be reviewed.
Review does not perturb line numbering of sources, which in turn can affect the text of other messages.
Review can be performed by people not familiar with modifying Ada source code.
Review status values other than False_Positive and Intentional are available (e.g., Pending as well as Custom Review Status).
Advantages of adding pragmas to the source include:
Review is integrated with the sources and easier to relate to the sources.
Review is less likely to be invalidated by other source changes; the mapping from the review to the message being reviewed is more straightforward.
Existing editing and version control tools can be used to create and manage reviews.
The different techniques can be mixed, even within a single Ada unit.
In addition, it is also possible and sometimes desirable to improve the code specification to clarify the intent of the code and at the same time eliminate false alarms, as explained in Improve Your Code Specification.
When reanalyzing source code, CodePeer classifies messages as unchanged, added, or removed with respect to the baseline run, and preserves the existing reviews for unchanged messages. See Classifying messages as unchanged, added, and removed for more details.
3.5.1. Through CodePeer Web Server and HTML Output¶
As described in Message Review, users can justify CodePeer messages from the HTML output, when accessing it by Viewing HTML output. The web server ensures the integrity of the SQLite database when serving multiple users.
3.5.2. Through GNAT Studio and CodePeer IDE Server¶
As described in Using the Locations View, users can justify CodePeer messages from GNAT Studio, when accessing it by Accessing Results Remotely (IDE Server). The IDE server ensures the integrity of the SQLite database when serving multiple users.
3.5.3. Through Pragma Annotate / Justification in Source Code¶
By adding in your Ada source code a pragma Annotate of the form:
pragma Annotate (CodePeer, False_Positive|Intentional, "<check name>", "<review message>");
CodePeer will no longer display the corresponding message by default and will instead record a manual review in the database during the following run of CodePeer on the updated source code.
Note that this pragma is ignored by the compiler when generating code; it only affects CodePeer’s handling of generated messages, so you can safely add it to your source code without any impact on compilation or runtime behavior.
If for some reason pragma Annotate would be both recognized and not handled the same way by your Ada compiler, then you can safely replace pragma Annotate by pragma GNAT_Annotate, which CodePeer handles identically.
When used in this way, an Annotate pragma takes exactly four arguments:
The identifier CodePeer.
One of two identifiers: False_Positive or Intentional.
False_Positive indicates a situation where the condition in question cannot occur but CodePeer was unable to deduce this.
Intentional indicates that the condition can occur but is not considered to be a bug.
A string literal matching one of the message kinds listed in the tables presented in Description of Messages.
A string literal which is used as the comment associated with the review of this message in the database.
The placement of the pragma in the source determines the messages (of the kind specified by the third argument) that it applies to. The pragma applies to messages associated with the preceding item in a statement or declaration list (ignoring other Annotate pragmas); if no such preceding item exists, then the pragma applies to messages associated with the immediately enclosing construct (excluding any portion of that construct which occurs after the pragma).
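As an illustration of the placement rule (a hypothetical sketch: the subprogram and the justification text are invented, and "range check" is one of the check names from Description of Messages):

```ada
procedure Store (X : Integer) is
   Y : Positive;
begin
   Y := X;  --  a range check message would be reported on this statement
   --  The pragma applies to the preceding item in the statement list:
   pragma Annotate
     (CodePeer, False_Positive, "range check",
      "X is validated by the caller");
end Store;
```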
For a message saying that a subprogram always fails, the pragma can be placed either after the definition of the subprogram or at the start of the declaration part of the subprogram.
For the following example:
procedure Throw_Exception (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;
CodePeer generates the following message:
throw_exception.adb:1:1: medium warning: subp always fails throw_exception always ends with an exception or a non-returning call
One way to handle this situation is to justify the message by adding an Annotate pragma as follows:
procedure Throw_Exception (Progr_Error : Boolean) is
   pragma Annotate
     (CodePeer, Intentional, "subp always fails", "reviewed by John Smith");
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;
A better solution to this problem would be to use pragma No_Return. Applying this pragma to the procedure Throw_Exception will prevent the display of the “subprogram always fails” message and, in addition, will provide a compiler check that no control-flow path can reach the “end” of the procedure.
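A sketch of that alternative, using the same Throw_Exception procedure:

```ada
procedure Throw_Exception (Progr_Error : Boolean);
pragma No_Return (Throw_Exception);  --  documents (and lets the compiler
                                     --  check) that the procedure never
                                     --  returns normally

procedure Throw_Exception (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   else
      raise Constraint_Error;
   end if;
end Throw_Exception;
```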
The message saying that a subprogram always fails may be emitted because of one or more messages saying that some errors always happen in the subprogram. Note that in this case, the “subprogram always fails” message must be explicitly justified by a dedicated pragma Annotate. The following example shows that to justify the message that a subprogram always fails, it is not enough to just justify the message about error(s) in the subprogram:
procedure Justified_Error_Inside (Progr_Error : Boolean) is
begin
   if Progr_Error then
      raise Program_Error;
   end if;
   pragma Assert (Progr_Error);
   --  Justified error inside subprogram
   pragma Annotate
     (CodePeer, Intentional, "assertion", "reviewed by John Smith");
end Justified_Error_Inside;
CodePeer still generates the message indicating the subprogram always fails:
justified_error_inside.adb:1:1: medium warning: subp always fails justified_error_inside fails for all possible inputs
Similarly, for the following example:
function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
end Func;
CodePeer generates the following message:
func.adb:8:24: medium: divide by zero might fail: requires X - Y /= 0
As it happens, this message is a false positive; the function will always safely return -4, but CodePeer is unable to deduce this fact.
One way to handle this situation is to justify the message by adding an Annotate pragma.
Consider adding a pragma as follows:
function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
   pragma Annotate
     (CodePeer, False_Positive, "Divide By Zero", "reviewed by John Smith");
end Func;
With this modification, CodePeer displays no message for this example.
In addition, if manual reviews are displayed (for example, if codepeer is invoked with the -output-msg-only -show-reviews-only switches), the following is displayed:
func.adb:8:24: suppressed: divide by zero might fail: requires X - Y /= 0
  Review #1: false_positive: Approved by Annotate pragma at 9:4: reviewed by John Smith
CodePeer emits a warning when an Annotate pragma does not have any effect. It is possible to hide these warnings with the switch -no-unused-annotate-warning.
3.5.4. Through Pragma Annotate / Modified in Source Code¶
Pragma Annotate can also be used in another way to help cope with incorrect CodePeer message generation. Suppose, for example, that CodePeer somehow becomes confused about the value of a variable at some point in a program and consequently generates incorrect messages. Pragma Annotate / Modified can be used to tell CodePeer to assume that (at the point where the pragma is placed) a given object is fully initialized and that nothing else should be assumed about its value; any previous deductions CodePeer may have made about the value of the object at the point of the pragma should be discarded. Note that this means that CodePeer can issue false positive messages stemming from the fact that it assumes that the object can have any value at the point of the pragma.
This is illustrated by the following examples:
procedure Proc1 is
begin
   --  Before the call to Initialize_Global_Variable_1,
   --  Global_Variable_1 is correctly known to be uninitialized.
   Initialize_Global_Variable_1;

   --  Suppose that Global_Variable_1 is initialized by the preceding
   --  call, but for some reason CodePeer fails to realize that and
   --  therefore incorrectly assumes that Global_Variable_1 remains
   --  uninitialized after the call. The pragma corrects this mistaken
   --  assumption.
   pragma Annotate (CodePeer, Modified, Global_Variable_1);

   --  Because of the pragma, no false positive message about reading
   --  an invalid value will be generated when Global_Variable_1 is
   --  passed as a parameter in the following call.
   Some_Procedure (In_Mode_Parameter => Global_Variable_1);
end Proc1;
procedure Proc2 (Count : in out Natural) is
begin
   Global_Variable_2 := 0;
   Update_Global_Variable_2;

   --  Suppose that Global_Variable_2 is updated by the preceding call,
   --  but for some reason CodePeer fails to realize that and
   --  therefore incorrectly assumes that Global_Variable_2 still has
   --  the value zero after the call.
   pragma Annotate (CodePeer, Modified, Global_Variable_2);

   --  Because of the pragma, the assignment to Count will not be
   --  incorrectly identified as dead code.
   if Global_Variable_2 /= 0 then
      Count := Count + 1;
   end if;
end Proc2;
When used in this way, an Annotate pragma takes exactly three arguments:
The identifier CodePeer.
The identifier Modified.
A name denoting a stand-alone object or a parameter.
Such an Annotate pragma is subject to the same placement restrictions as an Assert pragma (roughly speaking, such a pragma shall only occur within a statement list or a declaration list).
3.5.6. Through a CSV File¶
Another possibility for users to review messages is to export the messages via the CSV Output and then edit the CSV file (e.g. in a spreadsheet) to add manual reviews.
In order to store a review in a CSV file, you will need to fill the following columns (fields) for a given row (message):
This field must be set to
This field needs to be set to one of the following values:
This field needs to be set to one of the following values:
This is an optional field to set the date of the review.
This field will be set automatically, if left empty,
with the date of the
This is an optional field to set the name of the review approver.
This is an optional field to set the review comment.
The other fields must be kept unchanged.
Once the CSV file has been modified and saved (in the same CSV format), you can use codepeer_bridge on the server machine where CodePeer is running and where the CodePeer database is located, e.g:
codepeer_bridge -v --output-dir=<obj dir>/codepeer/<project>.output --import-reviews=<file.csv>
See Exporting/Importing User Reviews for more details.
3.5.7. Custom Review Status¶
It is possible to define your own review statuses for special needs. In order
to define one, you first need to select a review status category. There are
three categories: Pending, Not_A_Bug and Bug. The review status
category impacts the behavior of the user interface: by default, CodePeer hides
(in textual and HTML output as well as GNAT Studio) any messages that have a review
with a review status of category
Not_A_Bug. In addition, in the locations
view GNAT Studio highlights the messages with a review, using one particular color per
review status category.
Once you have selected a category, you can define your custom review status in
your project file with the project attribute corresponding to the selected
category:
project Proj is
   package CodePeer is
      for Pending_Status use ("To Do", "Don't know");
      for Not_A_Bug_Status use ("To Be Dealt With Later", "Don't care");
      for Bug_Status use ("To be fixed ASAP", "Problem");
   end CodePeer;
end Proj;
The review statuses pending, not a bug and bug each have as review
status category the category with the same name, while the review statuses
false positive and intentional both have the review status category Not_A_Bug.
Note that custom statuses currently can only be set from the GNAT Studio interface.
3.5.8. Improve Your Code Specification¶
In some cases, it may be better to annotate the Ada code directly, providing additional information that helps the analysis go further and also clarifies the intent of the code.
Ada provides various ways to improve the specification of the code, allowing static-analysis tools to reason on additional information. For example:
explicit range given on types, subtypes and variables
explicit assertions written in the code via pragma Assert or pragma Assume
explicit annotations written in the code via pragma Annotate / Modified
pre and post conditions
predicates and invariants
3.5.8.1. Using pragma Assert and Assume¶
For both pragmas Assert and Assume (as with any run-time check), analysis of
subsequent code assumes the success of the preceding run-time check.
pragma Assume can, therefore, be used to prevent generation of unwanted
messages about subsequent code. No message will be generated about a possible
division by zero in the following example (provided that CodePeer is unable to
deduce that failure of the assumption is a certainty):
declare
   Num : Natural := Some_Function;
   Den : Natural := Some_Other_Function;
   pragma Assume (Den /= 0);
   Quo : Natural := Num / Den;
begin
   ...
end;
In this respect,
pragma Assume can be viewed as an alternative to
pragma Annotate (CodePeer, False_Positive, ...);. It is important to
be aware of two differences between these two approaches to
avoiding unwanted messages. First, an Assume pragma (unlike an Annotate
pragma) can affect the run-time behavior of a program - an Annotate pragma
has no effect on run-time behavior. Second, an incorrect Assume pragma
can invalidate CodePeer’s analysis, possibly to an extent that is not obvious,
while an Annotate pragma has no effect on CodePeer’s analysis of subsequent
constructs. The user-visible effects (with respect to both run-time behavior
and to CodePeer’s analysis) of an Assume pragma are more like those of an
Assert pragma followed by an Annotate pragma that suppresses any message
about the possibility that the assertion might fail.
3.5.8.2. Using Pre and Post conditions¶
Taking the GNAT Studio demo located under
<GNAT Studio install>/share/examples/gnatstudio/demo as an example, and
running it under CodePeer via:
codepeer -P demo -level 1 -output-msg
this will generate, in particular, a potential array index check failure message. Instead of hiding the error, we're going to extend the contract of the subprogram Get_Char so that we can make sure the callers are correct.
First, let’s look more closely at the subprogram itself:
function Get_Char return Character is
   C : Character;
begin
   --  First check if the line is empty or has been all read.
   if End_Line then
      Read_New_Line;
   end if;

   C := Line (First_Char);
   First_Char := First_Char + 1;
   return C;
end Get_Char;
The potential error occurs on the access to the element of Line, where First_Char might not be less than or equal to 1024. So instead of removing this error, we're going to provide as a precondition of Get_Char the fact that the value has to be within the expected range:
function Get_Char return Character;
pragma Precondition (First_Char <= 1_024);
Note that the above uses a pragma so that this syntax can be used with any Ada compiler (the pragma will simply be ignored by non-GNAT compilers).
If you are using an Ada 2012 compiler then the following syntax is preferred:
function Get_Char return Character with Pre => First_Char <= 1_024;
Re-running CodePeer will remove the error: First_Char is now assumed to be at most 1024, and thus there is no potential error.
Note that this pragma doesn't have any effect on the executable code by default;
it's just here for documentation purposes and is used by the additional tools.
However, when using the GNAT compiler, a corresponding dynamic check can be
added, e.g. by compiling with assertions enabled (the -gnata switch).
It would be interesting to understand the effects of this precondition on its callers. At level 1, the analysis does not take any call graph into account, so no additional problems are spotted. However, when increasing the analysis level to level 2, an additional potential run-time error will be spotted in this piece of code:
procedure Skip_Spaces is
   Current_Char : Character;
begin
   loop
      Current_Char := Input.Get_Char;
      exit when Current_Char in Printable_Character;
   end loop;

   --  We must unread the non blank character just read.
   Input.Unread_Char;
end Skip_Spaces;
There’s a real problem here, as the loop may indeed go beyond the expected limit. Manually lifting potential problems into preconditions allowed us to move them up a level, where they can be fixed at the proper location, for example here by adding an additional exit condition:
loop
   exit when First_Char > 1_024;
   Current_Char := Input.Get_Char;
   exit when Current_Char in Printable_Character;
end loop;
3.5.9. Exporting/Importing User Reviews¶
When multiple users are reviewing messages using a local copy of the database, they can export the manual reviews (marking messages as e.g. false positives) performed locally, in order to import them later in a shared database.
In order to export all manual reviews stored in a database for a given project, you can do (on the same machine where reviews have been performed):
codepeer_bridge --output-dir=<obj dir>/codepeer/<project>.output --export-reviews=reviews.xml
This will save all manual reviews in a file called reviews.xml.
Then later on the “server” machine containing the reference database, you can copy this file and import it:
codepeer_bridge --output-dir=<obj dir>/codepeer/<project>.output --import-reviews=reviews.xml
You can also add the
--verbose switch to get detailed information
about the actions performed, in particular when some manual reviews are already
found in the reference database. The
codepeer_bridge tool will make the
following choices when dealing with potential conflicts between the input file
and the existing database:
If a message id is not found in the database, emit a warning and skip the manual review.
If the message id has no manual review, store the review history from the input file in the database.
If the message id already has manual reviews then:
Ignore all audit trails that have the same date, status and user.
Add all other audit trails to the database.
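These merge rules can be sketched in a few lines of Python. The data model here (message ids mapping to lists of audit-trail entries) is our own illustration, not codepeer_bridge's actual internals:

```python
def merge_reviews(database, incoming):
    """Merge incoming audit trails into database, returning warnings.

    Both arguments map a message id to a list of audit-trail entries,
    each a dict with "date", "status" and "user" keys (illustrative
    model only).
    """
    warnings = []
    for msg_id, trails in incoming.items():
        if msg_id not in database:
            # Unknown message id: warn and skip the manual review.
            warnings.append("message id %s not found, review skipped" % msg_id)
            continue
        existing = database[msg_id]
        # Ignore audit trails with the same date, status and user;
        # add all other audit trails to the database.
        seen = {(t["date"], t["status"], t["user"]) for t in existing}
        for t in trails:
            key = (t["date"], t["status"], t["user"])
            if key not in seen:
                existing.append(t)
                seen.add(key)
    return warnings
```

A message with no prior review simply has an empty list in the database, so its whole incoming history is stored, which matches the second rule above.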
See CodePeer_Bridge Tool for more details on the use of codepeer_bridge.
3.6. Text Output¶
You can get a compiler-like listing of messages generated by
CodePeer on its previous run by running it with the
-output-msg-only switch, e.g.:
$ codepeer -Pmy_project -output-msg-only
alias.adb:10:11: high: validity check: Int is uninitialized here
Note that you can also ask CodePeer to perform its analysis and then
output messages automatically via the -output-msg switch, e.g.:
$ codepeer -Pmy_project -output-msg
no partitioning needed.
starting analysis
analyzed p__body.scil in 0.00 seconds
analyzed p.scil in 0.00 seconds
analysis complete.
2 .scil files processed.
updating DB and generating reports
finish updating DB at ...
alias.adb:10:11: high: validity check: Int is uninitialized here
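Each message line above follows a file:line:column: ranking: text pattern, so the listing can be post-processed mechanically. A minimal Python sketch (the field names are our own):

```python
import re

# One compiler-like CodePeer message line, e.g.:
#   alias.adb:10:11: high: validity check: Int is uninitialized here
MESSAGE_RE = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+):\s*"
    r"(?P<ranking>\w+):\s*(?P<text>.*)$"
)

def parse_message(line):
    """Split one message line into its fields, or return None."""
    m = MESSAGE_RE.match(line)
    return m.groupdict() if m else None
```

Note that the message text itself may contain colons (as in the sample line), which is why the pattern only anchors on the first four fields.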
This listing can be filtered by using any of the switches listed in
Advanced -output-msg switches. In particular, you can use
-level to explore messages from any old run, and/or use
-cutoff to compare the displayed run with a run of your choice, ignoring the baseline.
3.7. Report File¶
If you want CodePeer to generate a report file in text format, the
-out switch allows you to specify the file in which the messages
emitted by option
-output-msg will be output, instead of standard
output, which is the default. Additionally, the -show-header
switch can be used to obtain a trace of the settings used for the
analysis, for traceability purposes. Consider the following project file:
package CodePeer is
   for Switches use ("-level", "max", "-output-msg", "-out",
                     "report_file.out", "-show-header", "-show-info");
end CodePeer;
Here is an example of a report file produced with the above switches:
date              : YYYY-MM-DD HH:MM:SS
codepeer version  : 18.2 (yyyymmdd)
host              : Windows 64 bits
command line      : codepeer -P my_project.gpr
codepeer switches : -level max -output-msg -out report_file.out -show-header -show-info
current run number: 4
base run number   : 1
excluded file     : /path/to/unit3.adb

unit1.ads:1:1: info: module analyzed: unit1
unit1.adb:3:1: info: module analyzed: unit1__body
unit2.adb:12:25: medium: divide by zero might fail: requires X /= 0
[...]
More switches can be used to influence which messages are emitted in
the report (such as
-show-info above); they are listed in
Advanced -output-msg switches.
Note that those switches influence reporting regardless of where the results
are output (option -out or standard output).
3.8. CSV Output¶
You can use the
-csv switch via e.g.
codepeer -output-msg[-only] -csv
to generate messages in a CSV format, suitable for use by other tools such
as spreadsheets. Optionally, you can choose to redirect this output
to a file using the
-csv-out option. For example:
codepeer -Pprj -output-msg-only -csv-out messages.csv
will generate a file
messages.csv with the following contents: first
a heading line listing all columns,
then one line for each message, with each field separated by a comma, e.g.:
f.adb,71,7,dead code,added,FALSE,low,warning,"dead code because F > 0",uncategorized,"561","","",func,,,6
The columns available are:
Full pathname of the file containing the message
Line number of the message
Column number of the message
added if message is new relative to the baseline, removed if message has been removed relative to the baseline, and unchanged otherwise, see also CodePeer Baselines
TRUE if the message has been reviewed manually, FALSE otherwise
Ranking of the message (annotation, info, low, medium, high, suppressed)
Message kind (check, warning, info, annotation)
Text of the message, surrounded by double quotes
Review classification (uncategorized, pending, false_positive, not_a_bug, bug)
List of relevant Common Weakness Enumeration ids for the given message. See Description of Messages for more information about CWE.
Only relevant for Precondition_Checks. This is a list of the checks that contributed to the associated precondition (that is, checks that might fail if the precondition is violated by a caller).
A subset of the Checks list, the result of omitting (in some cases) certain checks which experience has shown are less likely in practice to provide useful information. The omitted checks, if any, are typically validity checks or numeric overflow checks.
Fully qualified name of the enclosing subprogram
Timestamp of the last review, if relevant.
Name of the review approver, if relevant.
Last review comment, if relevant.
Unique identifier of the message.
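Because the message text field is double-quoted, such lines should be read with a CSV-aware parser rather than a plain split on commas. A small Python sketch using the sample row shown above:

```python
import csv
import io

# The sample message row from this section, parsed with Python's csv
# module so that the double-quoted text field is handled correctly.
row_text = ('f.adb,71,7,dead code,added,FALSE,low,warning,'
            '"dead code because F > 0",uncategorized,"561","","",func,,,6')

row = next(csv.reader(io.StringIO(row_text)))
# The surrounding quotes are stripped by the parser, and empty
# fields come back as empty strings.
print(len(row))                 # number of fields in the row
print(row[0], row[1], row[2])   # file, line and column of the message
```

The same approach extends to a whole exported file: pass the open file object to csv.reader and skip the heading line.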
The CSV output is emitted in the file specified by the
-csv-out switch if it is set, and in the one specified by
-out otherwise. If neither of these options is specified, the report is printed on
standard output.
3.9. Security Report¶
You can ask CodePeer to perform a security oriented analysis and generate
a report in HTML format via the
--security-report switch, which can be
combined with other switches, in particular
-level. For example:
codepeer -Pprj -level 3 --security-report
This switch will perform a CodePeer analysis focused on the following potential vulnerabilities (see Description of Messages for more details):
unprotected shared access
mismatched protected access
test always false
test always true
unused out parameter
loop does not complete normally
and will generate in the output directory a static HTML file,
together with two CSS files.
Potential Vulnerabilities Detected by CodePeer
Security Vulnerabilities Not Present
List of Ada Source Files Analyzed by CodePeer
3.10. Saving CodePeer Results in Configuration Management¶
This section describes the various options available to store CodePeer results in a Configuration Management tool, such as Subversion or Git, so that this information can be retrieved at any point.
All the information concerning messages, annotations, and user comments is stored in the CodePeer database (file xxx.db/Sqlite.db, where xxx.db is the directory specified by the Database_Directory attribute in your project file, and is <project>.db by default when using GNAT Studio). So if you want to be able to perform new runs on top of previous ones, this is the directory that you need to save.
In addition, in order to display run results for an existing database without re-running CodePeer, the following files in the xxx.output directory are needed:
From these files, you can regenerate output in various formats (HTML, text listings, GNAT Studio output) using the -output-msg-only CodePeer switch from the command line. The above files can also be used to display the CodePeer report via the CodePeer menu in GNAT Studio.
There are also some files under the codepeer directory that are used as a cache by GNAT Studio for speeding up:
As well as a related subdirectory under the xxx.output directory, where information about CodePeer annotations are stored for GNAT Studio use in displaying annotations in the source editor:
The above two GNAT Studio-related XML files can be saved to ensure faster code review later after the files have been restored, when using the option suggested below where files are saved under configuration management. In order to be effective, it’s important to restore the timestamps of these files, or at least to ensure that the files’ timestamps are more recent than that of the database (see Copying Remote Results Locally). The annotations subdirectory can be saved as well, but need not be if display of annotations isn’t needed. (Be aware that the copying of this subdirectory can slow down the execution of codepeer_bridge when the subdirectory contents are large.)
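The timestamp requirement above can be enforced with a small script after restoring files from configuration management. A Python sketch (the file names used here are placeholders, not the actual cache file names):

```python
import os

def touch_newer_than(reference, targets):
    """Make each target file's timestamp newer than reference's, so
    that timestamp-based comparisons treat the targets as up to date."""
    ref_mtime = os.path.getmtime(reference)
    for path in targets:
        # Set both access and modification times one second past the
        # reference file's modification time.
        os.utime(path, (ref_mtime + 1, ref_mtime + 1))

# Placeholder usage, with hypothetical paths standing in for the
# database and the GNAT Studio cache files:
#   touch_newer_than("obj/prj.db/Sqlite.db", ["obj/codepeer/cache.xml"])
```

Running such a script right after a restore guarantees the cached XML files are considered more recent than the database, whatever timestamps the CM tool assigned.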
Note that the database file does NOT contain source files, so when running CodePeer to e.g. regenerate the HTML output or display sources in GNAT Studio, you also need to provide access to the corresponding version of the source files.
In addition, the database is a historical database, which means it contains all previous analyses performed and allows for comparisons and for changing the baseline (see CodePeer Baselines).
So one option is to save all these files as well as the corresponding source version number of your CM system.
Another option is to save in configuration management (instead or in addition) the various output produced by CodePeer (e.g. the HTML output, and/or the text output). See HTML Output and Text Output for more details.
A third option is to use the -output-msg-only or -output-msg CodePeer switches to generate and save the raw messages. For example the following command:
codepeer -P<project> -output-msg-only
will list all messages generated by CodePeer in a compiler-like format. This output can be redirected to a file and put under configuration management.
See also Text Output for more relevant switches, in particular the -show-annotations and -show-manual-reviews switches.
Using -output-msg-only, you can save in text format all the information you'd like to keep under configuration management.