6. CodePeer Usage Scenarios

There are many different ways to take advantage of CodePeer capabilities, depending on the stage of the project’s lifecycle. During development phases, CodePeer can be run every day, or as often as there is a build of some part of the system. In this mode, using CodePeer to find defects before a project enters run-time testing minimizes the number of crashes encountered during testing, allowing functional tests to run to completion and thereby providing more usable output for test engineers. On existing code being maintained, it can be used to find potential problems that are not revealed by testing and which can be extremely difficult to track down after deployment. It can also be used to perform a change impact analysis. In high-integrity and certification-related environments, it can be used to demonstrate the absence of certain kinds of errors or to improve the quality of code review activities.

This section describes various situations in detail and points to the most useful features of the tool for each of them.

The following scenarios illustrate how CodePeer can be used to:

  • Find Defects in Existing Code
  • Perform a Change Impact Analysis
  • Use Annotations for Code Reviews
  • Identify Possible Race Conditions
  • Provide Evidence for Program Verification

6.1. Initial setup

Examples in this section use the GPS Demo example, located under <GNAT install>/share/examples/gps/demo, and assume that GPS is installed.

In order to run CodePeer for a full analysis (generation of SCIL files and analysis of all files), you can run on the command line:

$ codepeer -P demo.gpr

You can then read messages either in GPS (through the menu CodePeer -> Display Code Review) or by running codepeer again with the -output-msg-only switch:

$ codepeer -P demo.gpr -output-msg-only

Before going any further, it should be noted that no matter how it’s configured, CodePeer performs a resource-intensive computation, both in terms of memory and processor time. For this reason, we highly recommend using a 64-bit version of the tool on a machine containing as many cores as possible. CodePeer takes advantage of multiple cores through the -jxxx switch, where 0 means using the number of cores available on the system. Thus the recommended usage is:

$ codepeer -P demo.gpr -j0

Of course this will not make any difference on the demo project, but it will on an actual large application.

See Running CodePeer from the Command Line for more information.

6.2. Find Defects in Existing Code

One of the most interesting capabilities of the tool is its ability to find bugs in an existing application. However, while this is one of the most promising ways of using the technology, it is also one of the most challenging. This is due to several characteristics of the kind of static analysis performed by CodePeer:

  • The tool performs a sound (messages reported with High rank are certain) and complete (all possible errors are reported) analysis; that is to say, it will not miss errors. As a result, there may be many false positives, in particular on large bases of existing code.
  • The tool performs its analysis through all paths and calls. As a result, there is a potential complexity explosion on large codebases, which may result in the analysis taking a very long time.

This section deals with how to mitigate those two problems and use CodePeer in a pragmatic way to provide quick improvements on existing large codebases.

6.2.1. Start with the simplest analysis

The initial objective when running CodePeer on an existing application is to find potential problems that have not been spotted by testing and reviews. The goal is not to find all of them, but rather the most obvious and potentially problematic ones. Therefore, in this mode, we try to minimize the number of false positives, even if it means missing potential problems (this trade-off makes the tool incomplete). Once this first set of problems is dealt with, we can extend the analysis to find the next set.

There are two advantages to following this path. This analysis takes less computing time. It also gives a manageable iterative path to address problems found by the tool, helping to prioritize the most important ones.

The main switch to configure the depth of analysis is -level, going from 0 (minimum) to 4 (maximum, sound and complete analysis). At level 0, CodePeer runs as fast as possible and minimizes the number of false positives, while possibly missing some potential problems. For an initial analysis, level 0 should be used, then raised once the analysis at this level is complete.

In the demo example, you will run:

$ codepeer -P demo.gpr -level 0

In the example, this mode reports only 4 messages to be analyzed: two potential run-time errors (array check might fail), one loop that does not terminate, and one suspicious precondition. By comparison, level 4 gives 18 messages.

See Command Line Invocation for more information on CodePeer switches.

6.2.2. Analyze High / Medium messages

Once the analysis is done, CodePeer ranks messages in categories depending on how interesting, or likely to happen, a potential failure is. High designates failures that will almost certainly happen, Medium those that have a reasonable probability of happening, and Low those that are somewhat unlikely. When analyzing existing code, you should review messages in order of rank: first High, then Medium, and finally Low.

When looking at messages ranked Medium, you may find a number of false positives, that is, situations where the code is actually fine. It’s possible to provide a manual review for these, for example by clicking on the “note” icon in front of the message in GPS, so that the message is hidden from further analyses.

In the demo example in particular, there’s one clear warning that flags intended behavior: the infinite loop in sdc.adb. This loop is the application’s main loop and is not expected to terminate. It’s a good idea to remove the message with an associated review comment.

See Categorization of Messages for more information on message categories.

6.2.3. Customize message ranking

After a first pass over CodePeer reports, you will probably find that some reported errors are very accurate, while others are not. This is due to many factors, including the coding style and the type of problem that the application is addressing, and varies from one application to another. It’s good practice to adjust the CodePeer analysis so that message categories that are less accurate get a lower ranking, helping reviewers concentrate on messages that provide the best return on investment.

This can be done through configuration of a message pattern file. For example, on the demo.gpr project, we’re going to remove the “array index check might fail” category of messages as seen in input.adb.

This is done by providing an extra message pattern configuration file. We’ll name it “additional_patterns.xml”:

<?xml version="1.0"?>
<Message_Probability_Rules>
   <Message_Rule Matching_Probability="SUPPRESSED">
      <Message_Pattern>
         <String_Specifier
          Name_Of_String_Attribute="LIKELIHOOD"
          String_To_Match="CHECK_MIGHT_FAIL" />
         <String_Specifier
          Name_Of_String_Attribute="CHECK_KIND"
          String_To_Match="ARRAY_INDEXING_CHECK" />
       </Message_Pattern>
   </Message_Rule>
</Message_Probability_Rules>

This can then be used in the CodePeer command line:

$ codepeer -P demo.gpr -level 0 -additional-patterns additional_patterns.xml

Reloading the analysis (in GPS, for example) then removes these messages.

See Format of MessagePatterns.xml File for more information on the format of the message pattern file.

6.2.4. Add message review pragmas

As discussed elsewhere in this document, CodePeer supports manual review of the status of a message after it has been generated (see the discussion of the Set Review Status drop-down box in Edit Message Window (Provide Message Review), and of the -show-reviews[-only] command line option for codepeer -output-msg[-only] in Text Output).

A similar effect can also be achieved, if this is desired, by adding annotation pragmas to the Ada code that is being analyzed.

For the following (contrived) example:

function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
end Func;

CodePeer generates the following message:

func.adb:8:24: medium: divide by zero might fail: requires X - Y /= 0

As it happens, this message is a false positive; the function will always safely return -4, but CodePeer is unable to deduce this fact.

One way to handle this situation is to justify the message by adding an Annotate pragma.

Consider adding a pragma as follows:

function Func return Integer is
   X, Y : Integer range 1 .. 10000 := 1;
begin
   for I in 1 .. 123 loop
      X := X + ((3 * I) mod 7);
      Y := Y + ((4 * I) mod 11);
   end loop;
   return (X + Y) / (X - Y);
   pragma Annotate (CodePeer, False_Positive,
                    "Divide By Zero", "reviewed by John Smith");
end Func;

With this modification, CodePeer displays no message for this example.

However, if manual reviews are displayed (for example, if codepeer is invoked with the -output-msg-only -show-reviews-only switches), the following is displayed:

func.adb:8:24: suppressed: divide by zero might fail: requires X - Y /= 0
   Review #1: false_positive: Approved by Annotate pragma at 9:4: reviewed by John Smith

See section Through Pragma Annotate in Source Code for more details on how to use this pragma.

6.2.5. Improve your code specification

Going back to the problem we identified, completely removing all suppressed array checks may not be the best approach. In this particular case, it may be better to annotate the Ada code with additional information to help the analysis go further.

Ada provides various ways to improve the specification of the code, thus allowing static analysis tools to reason with additional information. For example:

  • explicit range given on types, subtypes and variables
  • explicit assertions written in the code
  • pre and post conditions
  • predicates and invariants

In this example, instead of hiding the error, we’re going to extend the contract of the subprogram Get_Char so that we can make sure the callers are correct.

First, let’s look more closely at the subprogram itself:

function Get_Char return Character is
   C : Character;
begin
   --  First check if the line is empty or has been all read.

   if End_Line then
      Read_New_Line;
   end if;

   C := Line (First_Char);
   First_Char := First_Char + 1;

   return C;
end Get_Char;

The potential error occurs on the access to an element of Line, where First_Char might not be smaller than or equal to 1024. So instead of removing this error, we’re going to provide as a precondition of Get_Char the fact that the value has to be within the expected range:

function Get_Char return Character;
pragma Precondition (First_Char <= 1_024);

Re-running CodePeer without the message pattern does remove the error: First_Char is now assumed to be at most 1024, and thus there is no potential error. Note that this pragma doesn’t have any effect on the executable code by default - it’s there for documentation purposes and for use by analysis tools. However, when using the GNAT compiler, a corresponding dynamic check can be added (by compiling with the flag -gnata).
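With GNAT, such a rebuild could look as follows (this is a sketch: gprbuild is assumed to be the project builder here, and -gnata enables assertion checks, including Precondition pragmas):

```shell
$ gprbuild -P demo.gpr -cargs -gnata
```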

It would be interesting to understand the effects of this precondition on its callers. At level 0, the analysis does not take the call graph into account, so no additional problems are spotted. However, when increasing the level of analysis, specifically to level 2, an additional potential run-time error is spotted on this piece of code:

procedure Skip_Spaces is
   Current_Char : Character;
begin
   loop
      Current_Char := Input.Get_Char;
      exit when Current_Char in Printable_Character;
   end loop;
   --  We must unread the non blank character just read.
   Input.Unread_Char;
end Skip_Spaces;

There’s a real problem here, as the loop may indeed go beyond the expected limit. Turning potential problems into preconditions allowed us to move them up a level, where they can be fixed at the proper location, for example here by adding an additional exit condition:

loop
   exit when First_Char > 1_024;
   Current_Char := Input.Get_Char;
   exit when Current_Char in Printable_Character;
end loop;

6.3. Performing a Change Impact Analysis

When maintaining a codebase over a long time, one common situation is to have to make small changes in a large application without having the proper means to validate this change. This is in particular true when full system testing is impractical, or where the initial expertise is no longer available. However, making a change as small as a few lines, presumably fixing a bug, may have an impact in a completely different location in the code, which is extremely hard to foresee.

In this context, CodePeer provides a useful way to assess the potential impacts of a change by identifying the differences in terms of potential vulnerabilities before and after the change.

6.3.1. Running the First CodePeer Baseline

The first step is to run CodePeer before the change. The result of the tool is completely irrelevant here, as we will only concentrate on the new messages. Indeed, we can probably assume that all potential vulnerabilities identified by the tool are likely not to be an actual problem on an application that has been deployed for a long time, thus “proven in use”. However, any new message coming from the fresh modification is highly suspicious.

The quality of this run is extremely important for getting meaningful results. Depending on the size and complexity of the application, a trade-off needs to be made between two issues:

  • running too ambitious an analysis may lead to physical memory exhaustion or excessive run time. In both of these cases, CodePeer will automatically cut the analysis short so that it can carry on working, but the cut point may not be deterministic. The level of analysis must therefore be low enough that the run is “complete” and deterministic.
  • too restrictive an analysis will prevent propagating the effects of a change. Indeed, at the lowest levels, no contract propagation is made between subprograms; the analysis is local. While this is appropriate in particular for the previous scenario, it defeats the purpose of impact analysis, where we’re looking for the implications of a change across module boundaries.

By default, a level 3 analysis is advised for the purpose of impact analysis, e.g.:

$ codepeer -P demo.gpr -level 3

This provides a decent trade-off, enabling cross-module analysis. If this analysis takes too much time, using level 2 or decreasing the partition size through the -dbg-partition-limit switch may help (its default value is 3000), resulting in a less-accurate impact analysis.

In certain rare cases, the analysis may differ from one run to the other due to random timeout. In such cases, see Time Limitations for information on how to mitigate these problems.

6.3.2. Making the Change

Once a deterministic analysis is done, the actual code modification can be done. Note that in this context, we’re assuming a relatively well-identified change in the code, possibly a few lines. If a lot of code and units are added, lots of new messages may appear in the code, and partitioning may have to be changed, which lowers the benefits of this way of using the tool.

6.3.3. Viewing the Impact of the Change

After the change is made, both the SCIL generation phase and the CodePeer analysis need to be run on the new codebase with the exact same switches. CodePeer will automatically record the new messages separately.

New messages can then be filtered out, either from the graphical interfaces (GPS, GNATbench), from the web view, or from the CSV view. To generate the CSV with our case study, codepeer may be run in the following way:

$ codepeer -P demo.gpr -output-msg-only -csv -out messages.csv

Importing this CSV (messages.csv) into, for example, Excel allows filtering rows whose “History” field is “added”. This in effect identifies messages that have been added since the first run.
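Alternatively, the same filtering can be sketched on the command line. The CSV sample below is made up for illustration, and the real column layout may differ, so check the header of your generated messages.csv before relying on the field positions:

```shell
# Illustrative stand-in for the file produced by:
#   codepeer -P demo.gpr -output-msg-only -csv -out messages.csv
cat > messages.csv <<'EOF'
File,Line,Rank,Message,History
sdc.adb,12,medium,array index check might fail,unchanged
input.adb,34,high,divide by zero might fail,added
EOF

# Keep the header line plus any row whose last (History) field is "added"
awk -F, 'NR == 1 || $NF == "added"' messages.csv > new_messages.csv
cat new_messages.csv
```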

6.3.4. Fixing the Code in Case of Problems

The impact analysis may reveal potential problems and imply corrective actions. The two previous steps may be repeated until the outcome is satisfactory. Note that CodePeer always identifies new messages relative to the previous baseline. By default, the baseline is the first CodePeer run - so in the current example, every message that appeared after this first run would be flagged “new”. This can be further controlled by the -baseline, -set-baseline-id, and -cutoff codepeer switches if needed.
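As a hedged illustration, a later run could be compared against a specific earlier run rather than the implicit baseline; run 2 is assumed here to be the desired comparison point (see the switch reference for the exact semantics of -cutoff):

```shell
$ codepeer -P demo.gpr -level 3 -output-msg-only -cutoff 2
```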

6.4. Use Annotations for Code Reviews

Whether a formal team process or an ad-hoc, one-person activity, manually reviewing source code is a good way to identify problems early in the development cycle. Unfortunately, it can also be quite time consuming, especially when the reviewer is not the author of the code. CodePeer reduces the effort required for understanding source code by characterizing the input requirements and the net effect of each component of the code base.

Specifically, CodePeer determines preconditions and postconditions for every Ada subprogram it analyzes. It also makes presumptions about external subprograms it calls whose source code is not available for review, or which are so complex that they threaten to exhaust the available machine resources. CodePeer displays preconditions, presumptions, and postconditions within the GPS source editor and in its web-based source listings as an Ada comment block immediately preceding the first executable line of the subprogram. If a large number of conditions are found, the display list will be truncated and an ellipsis (...) substituted for the undisplayed messages. You may display all of the preconditions, presumptions, and postconditions in the bottom pane of the File Source view by clicking on the P/P link at the top of the comment block or on the ellipsis (...) at the bottom of a truncated comment block.

The preconditions displayed by CodePeer are implicit requirements that are imposed on the inputs to a subprogram, as determined by analyzing the algorithms used within the subprogram. Violating preconditions might cause the subprogram to fail or to give meaningless results. During code review, the reviewer can verify that the preconditions determined by CodePeer for the code as written are appropriate and meet the underlying requirements for the subprogram.

Early in a development cycle, system documentation might be missing or incomplete. Since CodePeer generates preconditions for each module without requiring the entire enclosing system to be available, it can be used before system integration to understand subprograms as they are developed. In a mature, maintained codebase the system documentation might no longer agree with current code’s behavior. In either case, CodePeer’s generated preconditions can be used to verify both the written and unwritten assumptions made by the codewriters.

Presumptions represent assumptions made by CodePeer about the results of a call on a subprogram whose code is unavailable for analysis. A separate presumption is made for each call site to the unanalyzed subprogram, with a string in the form @<line-number-of-the-call> appended to the name of the subprogram. Presumptions do not generally affect the preconditions of the calling routine, but they might influence postconditions of the calling routine.

Postconditions are characteristics of the output which a subprogram could produce, presuming its preconditions are satisfied and the presumptions made about unanalyzed calls are appropriate. Even in the absence of other documentation, postconditions can help a reviewer understand the purpose and effect of code. Likewise, postconditions can be helpful to software developers who use a subprogram. Comparing postconditions to either preconditions or the context of calling routines can provide valuable insight into the workings of the code which might not be obvious from solely a manual review.

6.5. Identify Possible Race Conditions

CodePeer detects common forms of race conditions. A race condition may exist if there are two or more concurrent tasks that attempt to access the same object and at least one of them is doing an update. For example, if a Reader task makes a copy of a List Object at the same time a Writer task is modifying the List Object, the copy of the List Object may be corrupt. Languages such as Ada use synchronization or locking as a means to guard against race conditions. CodePeer identifies race conditions where synchronization is not used or is used incorrectly. (Note that the current release of CodePeer does not identify potential deadlocks, also known as deadly embraces, where two tasks are stuck waiting on locks held by the other.)

A lock is held during any protected subprogram or protected entry call. Any variable that can be accessed by more than one referencing task simultaneously must be locked at every reference to guard against race conditions. Furthermore, the referencing lock should match the lock used by other tasks to prevent updates during access. If locking is absent, or if one reference uses a different lock than some other reference, CodePeer identifies a possible race condition. Note that an identified race condition is not guaranteed to create problems on every execution, but it might cause a problem, depending on specific run time circumstances.

Note that if a lock is associated with an object that is newly created each time a subprogram is called, it does not actually provide any synchronization between distinct calls to that subprogram. A lock is only effective if it is associated with an object visible to multiple tasks. CodePeer ignores locks on objects that are not visible to multiple tasks since they have no synchronizing effect. This means CodePeer may indicate there are no locks held at the point of a reference to a potentially shared object even though there are in fact some local locks held. A future release will identify any potential problems associated with local locks more explicitly.

CodePeer must understand the tasking structure of the program being analyzed to detect race conditions. There are two types of entry points that are important to race condition analysis: Reentrant entry points and Daemon entry points. A Reentrant entry point represents code that can be invoked by multiple tasks concurrently (e.g. task types). A Daemon entry point (also called a singleton) is presumed to be invoked only by a single task at a time (e.g. task bodies).

Standard Ada tasking constructs (such as tasks and protected objects) are identified automatically by CodePeer as needed. In addition, you can manually identify reentrant entry points with the -reentrant “module:subp” option on the CodePeer command line. Use the -daemon “module:subp” to identify daemon entry points. See Command Line Invocation for the syntax for these options.
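For example, using the “module:subp” syntax described above (the package and subprogram names are hypothetical):

```shell
$ codepeer -P demo.gpr -reentrant "Workers:Process_Item" -daemon "Main_Loop:Run"
```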

You may also identify Reentrant and daemon procedures for CodePeer by using the GNAT-defined pragma Annotate. This pragma has no effect on the code generated for the execution of a program: it only affects CodePeer’s race condition analysis. In this example:

package Pkg is
   procedure Single;
   pragma Annotate (CodePeer, Single_Thread_Entry_Point, "Pkg.Single");
   procedure Multiple;
   pragma Annotate (CodePeer, Multiple_Thread_Entry_Point, "Pkg.Multiple");
end Pkg;

CodePeer will assume that Pkg.Single is a single thread entry point (or “daemon”) procedure and that Pkg.Multiple is a multiple thread entry point (or “reentrant”) procedure. An Annotate pragma used in this way must have exactly three operands: the identifier CodePeer, one of the identifiers Single_Thread_Entry_Point or Multiple_Thread_Entry_Point, and a string literal whose value is the fully qualified name of the procedure being identified.

To allow one pragma to apply to multiple subprograms, the final string literal may also have the same “wildcard” syntax supported by the -reentrant and -daemon command line options. In this example:

package Foo_Procs is
   procedure Foo_123;
   procedure Foo_456;
   pragma Annotate (CodePeer, Single_Thread_Entry_Point, "Foo_Procs.Foo*");
   procedure Foo_789;
end Foo_Procs;

the pragma would apply to the two procedures which precede it. If the same pragma were used as a configuration pragma in an associated configuration pragma file (described below), the pragma would apply to all three procedures.

Except when used as a configuration pragma (described below), the pragma must occur in the same immediately enclosing declarative_part or package_specification as the procedure declaration, not before the procedure’s declaration, and not after its completion. For a general description of pragma Annotate, see the GNAT Reference Manual.

Annotate pragmas may be used as configuration pragmas. In the preceding example, the same pragmas could have been present in an associated configuration pragma file (e.g., a gnat.adc file).
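As a sketch, the entry-point pragmas from the Pkg example above could be placed in a gnat.adc file so that they apply project-wide:

```ada
--  gnat.adc: configuration pragmas, applied to the whole project
pragma Annotate (CodePeer, Single_Thread_Entry_Point, "Pkg.Single");
pragma Annotate (CodePeer, Multiple_Thread_Entry_Point, "Pkg.Multiple");
```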

You might need to specify your own entry points explicitly to include all relevant external entry points, including call backs from external subsystems and interrupt entry points.

For partitions that include one or more task entry points, an indication of zero detected race conditions ensures there is no path within that partition from one of these entry points to any of the three kinds of unsynchronized access to shared data objects identified by CodePeer.

CodePeer performs race condition analysis by default. This helps to ensure that potential race conditions are identified early. Locating race conditions with run-time testing can be difficult since they normally cause problems only intermittently or under heavy load. Note that you may use the -no-race-conditions command line parameter to suppress race condition analysis.
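For example, to run the demo analysis without race condition detection:

```shell
$ codepeer -P demo.gpr -no-race-conditions
```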

Some programs make use of user-defined mutual exclusion mechanisms instead of using language-defined protected actions. If a pair of procedures (often with names like Lock and Unlock, Acquire and Release, or P and V) are used to implement mutual exclusion, pragma Annotate may be used to communicate this information to CodePeer. This pragma has no effect on the code generated for the execution of a program; the pragma only affects CodePeer’s race condition analysis. Given this example:

package Locking is
    procedure Lock;
    procedure Unlock;
    pragma Annotate (CodePeer, Mutex, "Locking.Lock", "Locking.Unlock");
end Locking;

CodePeer will assume that a call to Lock acquires a lock and a call to Unlock releases it. If the following procedure is then called, for example, from the body of a task type:

procedure Increment_Global_Counters is
begin
    Counter_1 := Counter_1 + 1;
    Locking.Lock;
    Counter_2 := Counter_2 + 1;
    Locking.Unlock;
end Increment_Global_Counters;

CodePeer’s race condition analysis will flag only the use of Counter_1 as being potentially unsafe.
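One way to address the message - assuming Counter_1 really is shared between tasks - is to extend the critical section so that both updates happen under the lock:

```ada
procedure Increment_Global_Counters is
begin
    Locking.Lock;
    --  Both counters are now updated while the lock is held,
    --  so neither access should be flagged as potentially unsafe.
    Counter_1 := Counter_1 + 1;
    Counter_2 := Counter_2 + 1;
    Locking.Unlock;
end Increment_Global_Counters;
```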

CodePeer trusts the accuracy of the pragma; no attempt is made to verify that the two procedures really do somehow implement mutual exclusion. An Annotate pragma used in this way must have exactly four operands: the identifier CodePeer, the identifier Mutex, a string literal whose value is the fully qualified name of (only) the lock-acquiring procedure, and a corresponding string literal for the lock-releasing procedure. Except when used as a configuration pragma (described below), the pragma must occur in the same immediately enclosing declarative_part or package_specification as the two procedure declarations, not before either procedure’s declaration, and not after either procedure’s completion.

Annotate pragmas may be used as configuration pragmas. In the preceding example, the same pragmas could have been used in an associated configuration pragma file (e.g., a gnat.adc file). For a general description of pragma Annotate, see the GNAT Pro Reference Manual.

The Race Condition report gives complete details about the shared objects in your program that might be subject to unsafe access. The Race Condition report is divided into three panes. The upper right pane, which runs horizontally, summarizes which subprograms CodePeer has determined are daemon-task entry points or reentrant entry points. The narrow pane which runs along the left shows the shared objects that might be involved in a race condition. The third pane, which is the large pane taking up the lower right of the report, has a table summarizing the kinds of race conditions associated with each object and provides further information below, viewed by clicking on the name of a particular object. For each shared object, there is a summary report and a detailed report. The summary report identifies which entry point is associated with each possible race condition. The detailed report goes further by identifying every reference to the object organized by task entry point, locks held (L1, L2, etc.), and whether it is a read (R) or a write (W) access. A key at the bottom indicates the actual Ada object associated with each lock.

As mentioned above, CodePeer only concerns itself with locks that are visible to multiple tasks, so an indication of None in the locks held column means no task-visible locks are held. There may be locks associated with locally created objects, but these provide no effective synchronization between distinct tasks.

See Race Condition Messages for details on the messages produced by race condition analysis.

6.6. Provide Evidence for Program Verification

CodePeer’s core is a sound and complete static analyzer, which means that it is designed to consider all possible run-time errors and not miss any due to approximations in the analysis. See How does CodePeer Work? for more details on CodePeer’s internals.

For each possible run-time error in Ada, corresponding to raising an exception (e.g. Constraint_Error) during execution, CodePeer either outputs an error message classified as High, Medium, or Low depending on the likelihood of an error occurring, or it outputs no message, meaning that no error is possible at this program point. An error ranked High corresponds to a certain error, while an error ranked Medium or Low corresponds to a possible error depending on the context (or a false positive). The distinction between Medium and Low facilitates prioritizing the most probable errors for review.

By default, a set of prioritizing rules, both internal and external (in a user modifiable file) can hide errors and/or warnings that have a very low probability of causing actual errors. In order to recover all errors and warnings, e.g. to meet a certification requirement, you can run CodePeer in a full-report mode (option -level max, see other relevant options at Report File). Because a run of CodePeer can exercise as much code as exhaustive testing of the same program, inspecting the output of such a run gives exhaustive guarantees about the behavior of the program. The output consists both of prioritized errors and warnings and generated preconditions/postconditions for subprograms.

The results of CodePeer’s analysis are relative to the preconditions that it generates, so a manual review of the preconditions generated for top-level subprograms (the main program, or the external subprograms when analyzing a library) is needed to ensure that no error or warning is missing due to a too-restrictive analysis context. This is not needed for other subprograms, as CodePeer automatically checks that the generated preconditions are verified at call sites. It is also not needed if you specify your main subprograms in your project file, since CodePeer will then flag all preconditions on such main subprograms automatically. See Understanding the differences between preconditions and run-time errors for more details on preconditions and run-time errors.

In the context of software certification, CodePeer can be used in particular to perform:

  • Runtime error analysis (or Boundary value analysis). CodePeer can be used to automatically detect attempts to dereference a pointer that could be null, values outside the bounds of an Ada type or subtype, buffer overflows, numeric overflow or wraparound, and division by zero.
  • Control flow analysis. CodePeer can be used to detect suspicious and potentially incorrect control flows, such as unreachable code, redundant conditionals, loops that either run forever or fail to terminate normally, and subprograms that never return.
  • Data flow analysis. CodePeer can be used to detect suspicious and potentially incorrect data flows, such as variables read before they are written (uninitialized variables), variables written more than once without being read (redundant assignments), and variables that are written but never read.