SPARK

Examiner User Manual

EXM/UM

Issue: 10.1

Status: Definitive

15th December 2011

 


 

 

 

Originator: SPARK Team

Approver: SPARK Product Manager


Copyright

The contents of this manual are the subject of copyright and all rights in it are reserved. The manual may not be copied, in whole or in part, without the written consent of Altran Praxis Limited.

 

Limited Warranty

Altran Praxis Limited, save as required by law, makes no warranty or representation, either express or implied, with respect to this software, its quality, performance, merchantability or fitness for a purpose. As a result, the licence to use this software is sold ‘as is’ and you, the purchaser, are assuming the entire risk as to its quality and performance.

Altran Praxis Limited accepts no liability for direct, indirect, special or consequential damages, nor any other legal liability whatsoever and howsoever arising, resulting from any defect in the software or its documentation, even if advised of the possibility of such damages. In particular, Altran Praxis Limited accepts no liability for any programs or data stored or processed using Altran Praxis Limited products, including the costs of recovering such programs or data.

SPADE is a trademark of Altran Praxis Limited.

Note:  The SPARK programming language is not sponsored by or affiliated with SPARC International Inc. and is not based on SPARC™ architecture.


Contents

1          Overview

2          General description

2.1      Examiner Operation

2.2      Verification Conditions and Dead Path Conjectures

3          Operating the Examiner

3.1      Command Line Switch Summary

3.2      The Default Switch File

3.3      Exit status

4          Input and output files

4.1      The report file

4.2      The index file

4.3      The meta file

4.4      The target compiler data file

4.5      The target configuration file

4.6      The warning control file

4.7      VC and DPC output file structure

4.8      HTML Output

4.9      SLI files

4.10    Output Directory Control

5          Lexical and syntactic analysis

5.1      General description

5.2      Error messages

5.3      Reserved words

6          Static semantic analysis

6.1      General description

6.2      Error messages

6.3      Warning messages

6.4      Notes

7          Control flow analysis

7.1      General description

7.2      Error messages

8          Data and information flow analysis

8.1      General description

8.2      Information flow analysis

8.3      Data flow analysis

8.4      Automatic selection of flow analysis mode

8.5      Recommended use of derives annotations

8.6      Checking safety and security policies using flow analysis

8.7      Error messages

9          Controlling the display of warnings and flow errors

9.1      The accept annotation

9.2      The warning control file

10       Verification condition generation

10.1    General description

10.2    Error messages

11       Analysing automatically generated code

11.1    The KCG language profile

11.2    KCG language profile and parent access to its public child

12       Static limits and associated error messages

A          Appendix: Information-Flow and Data-Flow Analysis of while-Programs

Document Control and References

File under

Changes history

Changes forecast

Document references

 

 

 


1                       Overview

SPARK is an annotated sublanguage of Ada, intended for high-integrity programming. The language exists in variants based on Ada 83, Ada 95 and Ada 2005.  The SPARK language is described in two separate reports - “SPARK - the SPADE Ada Kernel” [1] (covering both the 95 and 2005 variants) and “SPARK 83 - the SPADE Ada 83 Kernel” [2]; these reports will be referred to here as the SPARK Definition. The Examiner is operated in the same way on SPARK 83, SPARK 95 and SPARK 2005 source code, although the rules which it implements differ according to the language variant selected.

As well as imposing language restrictions to eliminate ambiguities and insecurities that exist in the full Ada language, SPARK incorporates annotations, or formal comments. These are of two kinds:

Core annotations whose use is imposed by certain language rules of SPARK (The SPARK Definition contains a list of these), and

Proof contexts which may be employed to include formal specifications in a SPARK text, and to guide its machine-assisted proof.
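As a brief illustration (the package and identifier names here are invented for this example; the SPARK Definition gives the full annotation language), a SPARK 95 package specification might carry both kinds of annotation:

```ada
package Counter
--# own State;                            -- core annotation: abstract own variable
is
   procedure Increment (Amount : in Natural);
   --# global in out State;              -- core annotation
   --# derives State from State, Amount; -- core annotation
   --# pre Amount > 0;                   -- proof context
end Counter;
```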

The Examiner is a free-standing software tool to support the use of this language. The Examiner always:

·          checks conformance of a text with the rules of SPARK.

·          checks consistency of executable code with its core annotations.

and optionally can be used to:

·          obtain Verification Conditions (VCs) for SPARK programs to which the SPARK Simplifier may be applied. The Simplifier reduces a substantial proportion of the provable conclusions in verification conditions to “true”. Any VC conclusions remaining to be verified, after application of the Simplifier, can be read by the SPARK Proof Checker, which can then be used to prove them interactively. The use of the Simplifier and Proof Checker is described separately, in other Altran Praxis documents. This manual explains how to use the Examiner to generate verification conditions for SPARK programs. The process of proof is described in the manual “SPARK Proof Manual” [3].

·          obtain Dead Path Conjectures (DPCs) for SPARK programs to which the ZombieScope tool may be applied. ZombieScope analyses DPCs to find paths which are infeasible or dead. See the “ZombieScope User Manual” [4] for more details.

2                       General description

The Examiner supports several levels of analysis:

·          checking of SPARK language syntactic and static semantic rules

·          data flow analysis

·          data and information flow analysis

·          formal program verification via generation of verification conditions

·          proof of absence of run-time errors

·          dead path analysis

There is also an option to make the Examiner perform syntax checks only. Using this option on a source file does not require access to any other units on which the file depends, so files can be syntax checked on an individual basis. This allows any syntax errors to be corrected before the file is included in a complex examination.  This option must only be used as a pre-processor: the absence of syntax errors does NOT indicate that the source text is a legal SPARK program.

2.1               Examiner Operation

The Examiner will analyse the contents of all the source files named in the Examiner command line. Each file may contain any number of SPARK compilation units. Within a compilation unit, the body of any constituent subprogram or package may be fully implemented, or “hidden” (see SPARK Definition) or replaced by a stub, as appropriate.

For the Examiner to analyse a particular compilation unit, it may require access to some other compilation units. Specifically:

1         to analyse a package body, the Examiner requires access to its specification and to the specifications of its private children;

2         to analyse a package specification or main program which inherits other packages, the Examiner requires access to the specifications of all those inherited packages (and to the specifications of all packages which they themselves inherit, and so on);

3         to analyse a child unit, the Examiner needs access to the specification of its parent;

4         to analyse a sub-unit, the Examiner requires access to the unit containing its stub.

The two ways in which access to these units can be provided are described below.

Firstly, it is possible to specify all the files to be employed in an analysis in an Examiner command line (either directly, or indirectly by using meta files (see section 4.3)). In this case, the Examiner will analyse all the contents of those files, as it reads them, in the order in which their names occur in the command line. This order (and the order of units within each of the files) must be such that the Examiner only requires access to units which it has already analysed.

Alternatively, it is also possible to provide an index file (see section 4.2), which associates SPARK compilation units with the files which contain them. If an index file is supplied, the Examiner can use it to find compilation units it requires, but whose locations (file-names) have not been specified in the command line.

The index file is a text file that can be generated either directly by the user or by a user-supplied program. It could, for example, be produced by the “build” mechanism of a configuration-management tool. Facilities are provided for “inheriting” other index information (through a super-index), and for subdividing index information into “separate” files (using “auxiliary indexes”).

The Examiner checks conformance of the contents of the specified files to the rules of SPARK, performing all the tests that the visibility of the text allows. This analysis is performed in several passes. First lexical and syntax analysis of a compilation unit is carried out, allowing the Examiner to identify what other units are required to carry out the complete analysis of this compilation unit. The Examiner then reads all other required units, if they have not already been read. When all the required units of a compilation unit have been read the Examiner performs static-semantic analysis, and if this is successful, it goes on to perform data- and information-flow analyses.

2.1.1           Output files

The Examiner produces a listing file for each source file specified on the command line. This listing file has the same name as the source file but with the default file type .lst. It contains messages indicating the success of the analysis of each subprogram, or if an analysis is unsuccessful, indications of the errors found.

A single analysis report file is also produced. This lists the source files read, the compilation units contained within those source files, and all errors found during the analysis.

The report file and each listing file are terminated with a line reading

--End of file--------------------------------------------------

A similar line terminates the screen echo.

The Examiner can optionally produce report and listing files in HTML allowing them to be “browsed” using any Web Browser - see Section 4.8.

2.2               Verification Conditions and Dead Path Conjectures

If the options to generate VCs or DPCs (or both) have been selected by the user, the Examiner will, after performing the analyses described in the previous section, produce VCs and/or DPCs for each well-formed subprogram in the files specified for analysis.

It is recommended that VC or DPC generation is not attempted until the source text is free of SPARK semantic errors; this is because their generation takes additional time and the contents of the output files produced are unlikely to be valid unless the source is legal SPARK. It is not necessary to eliminate all flow errors from code before generating VCs (such errors should, of course, be considered for significance and acceptability).

2.2.1           Output files

When VC or DPC generation is selected, the following additional output files are generated for each subprogram in the files specified on the Examiner command line. They contain the FDL type declarations needed if the Simplifier, ZombieScope or Proof Checker are to be used, the results of the selected analysis, and associated proof rules. The file extensions of these output files are as follows:

 

File                          Extension

FDL Declarations              .fdl

Verification Conditions       .vcg

Rules                         .rls

Dead Path Conjectures         .dpc

The naming of these output files is covered in more detail in section 4.7.

2.2.2           Relationship with the proof tools

The files produced by the Examiner can be manipulated by the proof tools. VCs can be simplified by the Simplifier and the proof of VCs too complex for automatic proof by the Simplifier can be attempted using the Proof Checker. DPCs are processed by ZombieScope. The POGS tool provides a summary of VCs and DPCs indicating their source and current status. Figure 1 overleaf gives an overview of the relationship between these four tools and the input and output files used to connect them.

Figure 1 Relationship of the Examiner and Proof Tools

3                       Operating the Examiner

The Examiner is command line driven. The command line syntax is given below.

Command-line = “spark” { Command-option } Argument-list

Argument-list = Argument { Argument }

Argument = ( File-spec [ Argument-option ] ) | Meta-file-spec

Argument-option = ( “-listing_file=” file-spec | “-nolisting_file” )

Meta-file-spec = “@”File-spec

Argument-option gives the name of the listing file for that argument. If an extension is not included in the supplied file-spec, the default listing extension is used. The option “-nolisting_file” suppresses the generation of a listing file for the source file to which the option is applied (global suppression of listing files can be achieved by specifying the -nolistings command option; see section 3.1.3). The results are always summarised in the report file. The default listing-file name is the name of the source file, with the source-file extension replaced by the listing-file extension.

The file-specs of the argument-list specify those files whose contents are to be analysed. All units found in files specified on the command line will be analysed. If an extension is not included in a given file-spec then the default source extension will be used (see below). A Meta-file-spec is the name of a file containing a list of files to be analysed: this option is described further in section 4.3. On Windows, a file-spec may contain space characters; however, where spaces are used the entire file-spec must be enclosed in double quotes.

The Examiner may also be invoked without command line arguments in which case it displays help information giving the options available.
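For example (the file and directory names below are purely illustrative; the switches are described in section 3.1), typical invocations might look like:

```
spark -vcg -index_file=project.idx main.adb
spark -brief -nolistings @build.smf
spark "C:\My Sources\stacks.adb" -listing_file=stacks.lst
```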

The Command-options are given in the following table. Note that the option names may be abbreviated: the minimum abbreviation is that given in the column specifying the command syntax.

3.1               Command Line Switch Summary

3.1.1           SPARK Language Options

language (-la=language; default: 95)
    Selects language rules for SPARK83, SPARK95, SPARK2005 or KCG. Valid values for this switch are “83”, “95”, “2005” or “KCG”. In the absence of this switch, the Examiner operates in SPARK95 mode. See section 11 for more details on KCG.

profile (-pr=type; default: sequential)
    Selects a particular concurrency profile of the SPARK language which will be used during analysis. The default is “sequential”. The profile value “ravenscar”, which is only valid in SPARK95 and SPARK2005, selects the RavenSPARK concurrency profile which allows use of concurrent language features.

ada83 (-ad)
    Selects SPARK83 rules. This switch is equivalent to “-language=83” and is now considered obsolete, but is retained for compatibility with existing users’ scripts and projects.

3.1.2           Input File Options

source_extension (-so=file-type; default: .ada)
    Allows the user to specify a default extension for source files.

index_file (-i=file-spec)
    Identifies the index file to be used initially. The index file mechanism is described in section 4.2.

noindex_file (-noi)
    Suppresses the index file mechanism.

warning_file (-w=file-spec)
    Identifies the warning control file to be used. A warning control file is used to determine how certain warning messages generated by the Examiner are displayed. The format and content of a warning control file are described in section 9.2.

nowarning_file (-now)
    Provides full reporting of all warnings.

target_compiler_data (-ta=file_spec)
    Specifies the name of the target data file: this file is described in section 4.4.

notarget_compiler_data (-not)
    Suppresses the use of the target data file.

config_file (-conf=file_spec)
    Specifies the name of the target configuration file: this file is described in section 4.5.

noconfig_file (-noc)
    Suppresses the use of the target configuration file.

noswitch (-nosw)
    Suppresses the use of the switch file.

3.1.3           Output File Options

output_directory (-ou=dirname)
    Specifies the directory into which Examiner listing files, the report file, and VCs should be generated. Absence of this option (the default) means output files are generated in and below the current working directory. See section 4.10 for more details.

listing_extension (-li=file-type; default: .lst)
    Allows the user to specify a default extension for listing files. The “_” character may be used as a “wild card” in the listing extension specified by this switch. This character has the effect of preserving a character from the original source file extension. For example, given source files p.ads and p.adb with -listing_extension=ls_ we would generate listing files p.lss and p.lsb.

report_file (-rep=file-spec; default: spark.rep)
    Specifies the report file name. The format of the report file is described in section 4.1. The default extension .rep is applied if no extension is given.

noreport_file (-nor)
    Suppresses the production of the report file.

html (-ht[=dirname])
    Enables generation of HTML output files in addition to plain text output. “HTML” is the default directory name if none is given. This option cannot be used with the -noreport_file option. HTML report files are described in section 4.8.

plain_output (-pl)
    Causes report and listing files to be produced without line numbers, error numbers, cross-references in output, or dates. This can be useful when using “diff” utilities to compare analysis results because it leaves only the significant changes in the comparison output.

brief (-b[=nopath|fullpath])
    Causes on-screen errors and warnings to be issued in a “brief” format comprising
    <filename>:<line>:<column>: <message>
    This format is designed to allow integration with other development environments and tools. The switch optionally takes a value “nopath” or “fullpath”. In the former case, the <filename> reported is just the simple base-name of the offending file with no directory or path information. If “fullpath” is specified, then the <filename> includes the full path-name of the offending file. This is useful for projects with sources in multiple directories. -brief alone is equivalent to -brief=nopath.

makefile (-m)
    This option requires -brief to be set and further strips down the output to show only the essentials. This flag is particularly useful when running the Examiner from a makefile. This option suppresses the following output:

·         The summary report listing the number of errors/warnings and how many were suppressed.

·         Any otherwise non-suppressible notes, such as the note that only data flow analysis was performed.

nolistings (-nol)
    Suppresses the generation of all listing files (to suppress individual listing files see section 3). The results are always summarised in the report file.

nosli (-nosl)
    Suppresses the generation of the SLI files.
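The “_” wild card of the listing_extension switch can be illustrated with a short sketch. The following Python function is not part of the toolset, and its name and edge-case behaviour are our own assumptions; it merely mirrors the mapping described above, where each “_” in the pattern keeps the character at that position of the original source-file extension:

```python
def listing_name(source: str, pattern: str = "lst") -> str:
    """Illustrative mapping for -listing_extension: each '_' in the
    pattern preserves the character at that position of the original
    source-file extension; other pattern characters are used as-is."""
    stem, _, src_ext = source.rpartition(".")
    new_ext = "".join(
        src_ext[i] if ch == "_" and i < len(src_ext) else ch
        for i, ch in enumerate(pattern)
    )
    return stem + "." + new_ext

# With -listing_extension=ls_ as in the example above:
# listing_name("p.ads", "ls_") -> "p.lss"
# listing_name("p.adb", "ls_") -> "p.lsb"
```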

3.1.4           Analysis Options

syntax_check (-sy)
    Given this switch, the Examiner will only check that the source files are in the SPARK syntax. Note that this does NOT check whether a text is legal SPARK, and the switch must only be used as a pre-processor.

flow_analysis (-fl=type; default: information)
    The types are ‘information’, ‘data’ and ‘automatic’ and may be abbreviated to ‘i’, ‘d’ and ‘a’ respectively. Section 8 describes the different types of flow analysis.

policy (-po=type; default: disabled)
    The predefined policies that may be selected are ‘safety’ and ‘security’. This option enables checking of the specified information flow policy. See section 8.6 for more details.

rules (-ru=policy; default: none)
    The policies are ‘none’, ‘lazy’, ‘keen’ and ‘all’ and may be abbreviated to ‘n’, ‘l’, ‘k’ and ‘a’ respectively. This option defines the policy applied by the Examiner to replacement rule generation for composite constants.

noduration (-nodu)
    Causes the Examiner to ignore the predefined type Standard.Duration. This allows programs to use “duration” as an identifier.

vcg (-vc)
    Generates all VCs, including those for numeric overflow and real numbers.

dpc (-dp)
    Generates DPCs.

casing (-ca[=option])
    Enables case-sensitive checking of identifiers and issues a warning if the use of an identifier does not match the case used in its declaration. The allowed values for option are ‘s’ to enable casing checks only for use of identifiers declared in package Standard, or ‘i’ for only identifiers declared in the program being analysed. If no value is specified for option then both types of check are enabled.

Note that in release 8.1.5 and above, the -rtc, -exp and -realrtcs switches have been removed in favour of the new -vcg switch, which is equivalent to the old combination “-exp -realrtcs”.

3.1.5           Debugging and Tracing

The Examiner has a switch “debug” that allows various forms of information to be reported to the standard output stream during execution. In addition, a switch “dictionary_file” causes the Examiner to write the contents of its Dictionary data structure in a readable form into the indicated file.

 

debug (-de=choices; default: none)
    The choices are a sequence of one or more upper- or lower-case characters, with the following effects:

    e – Print the syntax tree of each expression as it is processed, followed by a trace of the expression walking algorithm as it encounters each node in that expression tree.

    h – Print a trace of HTML generation.

    l – Print a trace of entities looked up in the dictionary.

    r – Print the required and computed flow relations for each subprogram after flow analysis.

    f – Print a trace of file handling, including creation, opening, and deleting of files.

    u – Print a trace of required compilation unit and index file lookups.

    i – Print default loop invariants (in FDL) as they are generated.

    c – Print a trace of the component manager state.

    p – Print parser state on detection of a syntax error.

    k – Trace ranking and printing of FDL declarations.

    t – Print extra detail in the report file when -statistics is active.

    v – Print VCG state and BPG after DAG.BuildGraph.

    V – As v, but also print BPG during each iteration of VC Generation.

    d – Print FDL DAG to file dag_xxx.dot following BuildExpnDAG, where xxx is an integer that uniquely identifies each DAG.

    The “d”, “v” and “V” flags produce output in the “DOT” language that may be used with the GraphViz toolset for visualization of these graphs. See www.graphviz.org

    Any of these may be combined.

dictionary_file (-di=filename)
    Writes Dictionary contents to filename.

3.1.6           Other Options

noecho (-noe)
    Suppresses the Examiner's screen output.

annotation_character (-an=character)
    Allows selection of a character other than the default “#” to indicate the start of a SPARK annotation, e.g. -anno=$ would cause annotations to begin “--$” rather than “--#”. If the chosen character is significant to the command shell it may need to be escaped.

sparklib (-sp)
    Use the standard SPARK library. See the SPARK Library User Manual [5] for details.

statistics (-st)
    Causes the Examiner to insert statistics concerning the consumption of internal tables at the end of the report file. These statistics are described in section 4.1.

nostatistics (-nost)
    Suppresses the production of statistics on the consumption of internal tables.

fdl_identifiers (-fd=string; default: reject)
    This switch controls the Examiner’s treatment of FDL reserved words (see section 5.3). The option -fd=reject, or any abbreviation of ‘reject’, rejects all FDL reserved words as being syntactically unacceptable. This is the default. The option -fd=accept, or any abbreviation of ‘accept’, suppresses the rejection of FDL reserved words, but also prevents the generation of any proof files. Any string other than ‘reject’ or ‘accept’ (or their abbreviations) causes recognised FDL reserved words to be mangled on output by placing the string before the identifier, separated by a double underbar. As an example, -fdl=praxis would cause the identifier start to be output as praxis__start. Previously this was a binary option (-fdl_identifiers and -nofdl_identifiers) and this form is still supported, but deprecated.

version (-ve)
    If this switch appears anywhere on an Examiner command line, then the Examiner simply prints its banner information, the date, and static limits to the console. All further processing of command-line switches and analysis is terminated. This option is useful for quickly checking the version number, licence information, and static limits of an Examiner.

help (-he)
    Prints a summary of Examiner command line options to the screen.

original_flow_errors (-or)
    This switch affects the formatting of the flow errors, printing one line for every error (see section 8.4.4).

error_explanations (-er=option; default: off)
    Option is one of: off, first_occurrence, every_occurrence (which can be abbreviated to o, f and e respectively). Selects whether explanatory notes are appended to Examiner error messages. When first_occurrence is selected, the explanation appears only for the first occurrence of each different error message in each of the screen echo, listing files and report file. The use of the html switch overrides error_explanations and turns them off for the report file only; this is because explanations are already only one click away when viewing the HTML report file.

justification_option (-j=option; default: full)
    Option is one of full, brief, ignore (which can be abbreviated to f, b and i respectively). Selects how justifications introduced by the --# accept annotation (see section 9.1) are handled. If ignore is selected then any accept annotations are checked for errors but do not have any effect on the display of the errors or warnings to which they refer. If brief or full is selected then accept annotations do suppress the generation of their associated messages. In the former case a brief summary count of the number of matching justifications appears in the report and listing file; in the latter, a more complete summary table of matches is produced.
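The mangling rule of the fdl_identifiers switch can be sketched as follows. This Python function is not part of the Examiner, and the reserved-word set shown is a tiny invented subset (section 5.3 lists the real FDL reserved words); it only mirrors the behaviour described above:

```python
# Invented subset for illustration only; see section 5.3 for the real list.
FDL_RESERVED = {"start", "finish", "goal"}

def mangle(identifier: str, prefix: str = "praxis") -> str:
    """Sketch of -fdl=<string>: an FDL reserved word is output with the
    given string prepended, separated by a double underbar; any other
    identifier passes through unchanged."""
    if identifier.lower() in FDL_RESERVED:
        return prefix + "__" + identifier
    return identifier

# mangle("start") -> "praxis__start", matching the manual's example.
```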

3.2               The Default Switch File

The Examiner can read a default switch file as an alternative to placing the above switches on the Examiner’s command line each time it is invoked.  When run, the Examiner checks for the presence of a file called “spark.sw” in the current working directory.  If a file of this name is found its contents are interpreted exactly as if they had been typed at the start of the command line.  Switches in the default switch file may be entered in free format and the file may include Ada-style comments.  For example:

-------------------------------------------------

-- Default switch file for backward compatibility

-- with a “Standard” SPARK 83 Examiner

-------------------------------------------------

-language=83 -- select SPARK 83

-fdl=accept  -- ignore use of FDL identifiers

 

-- end of file-----------------------------------

When the Examiner finds and uses a default switch file it reports this in both its screen echo messages and in the report file.

The Examiner will not allow you to specify duplicate or contradictory options, e.g. both -statistics and -nostatistics. This holds true whether the options are specified on the command line, in the switch file, or in a combination of the two. The one exception to this rule is the combination of -warn and -nowarn, which can be used to override each other: the Examiner will process whichever one is specified last.

It is sometimes desirable to prevent the Examiner from processing the default switch file. This can be done via the -noswitch command-line option (see section 3.1.2).

3.3               Exit status

The Examiner sets its exit status to reflect the success of its operation. This value is provided for use in automated scripts where a very concise summary is sufficient. The values are set according to the following table:

0     Success with no unjustified errors or warnings

1     Unjustified warnings

2     Unjustified flow errors

3     Syntax/semantic errors

4-7   Reserved

8     Invocation error, e.g. contradictory command-line switches

9     Internal error, e.g. table overflow or internal exception
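In an automated build, the exit status can be mapped back to these categories. The following Python sketch (the function and category strings are our own, not part of the toolset) shows one way a wrapper script might interpret the value:

```python
def classify_exit(status: int) -> str:
    """Map an Examiner exit status to a coarse category, following the
    exit status table in section 3.3."""
    categories = {
        0: "success",
        1: "unjustified warnings",
        2: "unjustified flow errors",
        3: "syntax/semantic errors",
        8: "invocation error",
        9: "internal error",
    }
    if 4 <= status <= 7:
        return "reserved"
    return categories.get(status, "unknown")
```

A wrapper script might, for example, treat any status of 2 or above as fatal while tolerating unjustified warnings.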

 

4                       Input and output files

4.1               The report file

The report file summarises the activity and findings of the Examiner. A report file has the following parts:

Banner: The banner gives the tool name and release, and names the licensee of the tool. The date and time of analysis are also given.

Options: An indication of the analysis options used by the Examiner, including default values.

Selected files: A list of the filenames directly selected for analysis.

Index file list: A list of the index files used.

Meta file list: A list of the meta files used and the filenames read from them.

Warning selection: This lists the causes of warning messages which will be provided in a summarised form rather than reported fully.

Target compiler data: A list of the values read from the target compiler data file.

Target configuration file: A listing of the target configuration file, including any syntactic or static-semantic errors detected by the Examiner.

Source file list: A list of the full file-specs of all the source files used during analysis, both those supplied on the command line (including those specified in meta files) and those read using the index mechanism.

Units required but not found: A list of the units which were required during the analysis but could not be located, because

·          there was no index file, or

·          the given index file did not contain a reference to the required unit, or

·          the index file associated the unit with a file which did not exist, or

·          the unit was not found in the file associated with it by the index file.

Source file entries: A list of entries, one for each source file used. The entry contains the name of the source file, the name of the listing file (if any), a list of the units found or expected in that source file and finally a list of errors found in that file (if any). If a source file could not be opened this is indicated. For each entry in the list of units the Examiner reports whether

·          the unit was expected but could not be found - for example if the indication of its presence in a particular file, by the index, was erroneous;

·          only lexical and syntax analysis was attempted on the unit (which occurs for example if a required unit is preceded in its file by a unit which is not required);

·          a complete analysis, including static-semantic and flow analysis, has been attempted.

Each source file entry also contains a warning summary, summarising the suppressed warnings (if any) and noting (with an asterisk) those which have the potential to invalidate the analysis.

Resource statistics: A list summarising the size and usage of the Examiner's internal tables. If a run of the Examiner exhausts any one of these tables, it will produce an error message and stop. Information from this list may be requested by Altran Praxis if we need to tailor an Examiner with especially large tables for a particular customer.

This information is only produced if the statistics option is selected.

The tables are:

·          the relation table which is used to construct the flow relations for each subprogram

·          the string table which is used to store the identifiers and strings found in the source text

·          the symbol table which stores the information about each object declared by the program

·          the syntax tree which stores a symbolic representation of each unit which is examined

·          the VCG heap which is used in the generation of Verification Conditions

·          the record components table which is used to perform flow analysis of record fields

·          the record errors table which is used in reporting and merging flow errors associated with record fields.

References Where the messages produced by the Examiner refer to external documentation, this section gives a list of the abbreviations used.

4.2               The index file

When analysing a compilation unit the Examiner often needs to analyse other compilation units before it can proceed. For example, if package A inherits package B then the Examiner needs to have analysed the specification of B before it can analyse A. The name of the file containing the required compilation unit may be specified directly on the command line or in a meta-file. If not, the Examiner needs some means of determining which file contains the required compilation unit. The index file provides this mechanism for associating compilation units with the files that contain them.

4.2.1           Index file format

The index file is a text file, whose default extension is “.idx”.

Index-file = [ Super-index ] { Index-entry }

Super-index = “superindex” “is” “in” file-spec

Index-entry = File-entry | Component-entry

File-entry = Unit-name Entry-type “is” “in” file-spec

Entry-type = “auxindex” | “main_program” | Specification | “body” | “subunit”

Specification = “specification” | “spec”

Unit-name = Ada-unit-name

Component-entry = Unit-name “components” “are” Component-list

Component-list = Unit-name { “,” Unit-name }

Tokens, i.e. Ada-unit-names, file-specs, and the words “superindex”, “auxindex”, “main_program”, “specification”, “spec”, “body”, “subunit”, “is” and “in”, may not contain spaces or line breaks. Tokens may be separated by any number of spaces, tabs and/or line breaks. This means that an Index-entry may be broken over any number of lines, and blank lines are ignored.

The token “spec” is a permitted abbreviation of “specification”.

Comments begin with “--” and are terminated by a line break. A comment is treated as though it were a line break. Hence comments may be placed anywhere in the index file that a line break is allowed, except that there must be at least one space or line-break separating the end of a file-spec from the start of a comment.
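A small index file illustrating these rules might look as follows (the unit and file names are purely illustrative):

   -- comments run from “--” to the end of the line
   Stacks specification
          is in stacks_.ada        -- an entry may span several lines

   Stacks body is in stacks.ada    -- blank lines are ignored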

4.2.2           Super-index files

An index file may specify a Super-index which the Examiner will refer to if it fails to locate an entry for the unit it is looking for in the current index files. Super-index files can be used to create a hierarchical index file structure. This is discussed further in section 4.2.5.

An index file may specify at most one super-index. If a super-index is specified, it must be the first entry in the index file, e.g.:

   superindex is in proj_dir/project.idx
   ...

The Index-entries in the super-index file are regarded as being at a higher index file level than the current index file. Each Super-index introduces a new, higher level, as described in more detail in section 4.2.4.

4.2.3           Auxiliary index files

An auxiliary index file can be used to group together all the index entries for a package, its children and its separate subunits. An auxiliary index may only contain references to a single unit and its components: this includes both the specification and body of the unit and any units descended from it. An auxiliary index file (identified by an auxindex entry) cannot contain a Super-index, but it may include further auxindex entries (provided their Unit-name is an extension of their parent’s prefix), allowing the definition of a tree of auxiliary index files. The root of the tree is an auxindex entry whose Unit-name acts as a prefix. An Index-entry for a unit with a matching prefix is then potentially defined in an auxiliary index file within the tree.

For example, the file example.idx below contains an auxindex entry with the prefix A. An Index-entry for a unit named A, A.B, A.B.C, etc., is potentially defined within the tree rooted at A. In the example below, a unit named A may be found in the auxiliary index file a_aux.idx. Index-entries for units named A.B and A.B.C may be defined further down the tree, starting at node A.B, which is defined by the auxindex entry in the subordinate index file a_aux.idx.

 

example.idx

A auxindex is in a_aux.idx

P specification is in p.ads

P body is in p.adb

Q specification is in q.ads

 

    a_aux.idx

    A specification is in a.ads

    A.B auxindex is in a_b_aux.idx

    A.C auxindex is in a_c_aux.idx

    A.D specification is in a-d.ads

    A.D body is in a-d.adb

    A.Q.R specification is in a-q-r.ads

As can be seen from the example, auxiliary index files can have many entries, so the auxiliary index tree may be as shallow or as deep as desired.

Auxiliary index files do not introduce a new index file level. They are a structuring mechanism, and all auxiliary files in a tree are at the same level as the root of the tree. Trees of auxiliary indices may occur at different levels by adding auxindex entries to different super-index files.

4.2.4           Unit lookup rules

Each entry of an index file is considered to be defined at a particular level as shown in Figure 2.

Figure 2 Index file levels

The source files specified on the Examiner command line or in a meta-file are considered to be at level 0. The entries given in the index file specified on the command line or in the SPARK switch file are at level 1. If an index file at level ‘n’ contains a Super-index then the entries in the file specified in the Super-index are at level ‘n + 1’.

At any particular index file level, a unit required for analysis may be matched directly with a File-entry by a case-insensitive comparison of Unit-names, or the Unit-name given in an auxindex entry may be a prefix of the required unit’s name. If an auxindex entry has a Unit-name that is a prefix of the required unit, the search for a match is also attempted within the specified auxiliary index file. If an auxiliary index file itself contains auxindex entries whose Unit-names are prefixes of the required unit, the search continues within the auxiliary index files specified in those contained auxindex entries.

When a unit is to be located the following rules are applied:

1         Index file level 0 is considered to be the source files specified on the Examiner command line or in a meta-file.

2         Index file level 1 is the index file specified on the Examiner command line or in the switch file.

3         A Super-index file defined in an index file at level ‘n’ introduces a new index file at level ‘n + 1’.

4         A contradictory entry is a File-entry with the same Unit-name and the same Entry-type as another entry but with a different file-spec, or a Component-entry with the same Unit-name as another Component-entry but with a different Component-list, where both entries are at the same index file level. Contradictory entries are not permitted.

5         A search for a unit will start at level 0 and continue to successively higher levels until a match is found or there are no further levels.

The Examiner will terminate immediately with an error message if a misplaced or multiple Super-index is found in an index file, if a contradictory Index-entry is encountered, or if syntax errors are detected in an index file. Additionally, the Examiner will issue a warning message (suppressible with the warning control file keyword index_manager_duplicates) if two identical Index-entries are found anywhere in the index file hierarchy, since such duplicate Index-entries are superfluous and may indicate a mistake.
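For example, the following index file fragment (with illustrative names) shows both situations:

   -- contradictory entries: same Unit-name and Entry-type,
   -- different file-specs (the Examiner stops with an error)
   P specification is in p_old.ads
   P specification is in p_new.ads

   -- duplicate entries: identical, superfluous (warning only)
   Q body is in q.adb
   Q body is in q.adb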

A consequence of the rules is that the ordering of auxindex entries in an index file is unimportant and, furthermore, the search continues even if a match is not found within the tree of auxiliary files based at an auxindex entry.
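The level-by-level search described by these rules can be sketched as follows. This is an illustrative model, not the Examiner's implementation; auxiliary index files are simply flattened into the level of their root, since they do not introduce a new level.

```python
# Illustrative sketch of the unit lookup rules (not the Examiner's code).
# Each index file level maps (unit_name, entry_type) to a file-spec;
# unit names are compared case-insensitively.

def lookup(levels, unit_name, entry_type):
    # Rule 5: start at level 0 and continue to successively higher
    # levels until a match is found or there are no further levels.
    key = (unit_name.lower(), entry_type)
    for level in levels:
        if key in level:
            return level[key]
    return None

# Level 0: command-line sources; level 1: a local index file;
# level 2: its super-index (hypothetical file names throughout).
levels = [
    {},
    {("p", "specification"): "my_dir/p.ads"},
    {("p", "specification"): "proj_dir/p.ads",
     ("a", "specification"): "proj_dir/a.ads"},
]

# The entry at level 1 hides the one at level 2:
print(lookup(levels, "P", "specification"))  # my_dir/p.ads
print(lookup(levels, "A", "specification"))  # proj_dir/a.ads
```

This also shows the hiding behaviour described in section 4.2.5: the lower-level entry for P wins.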

4.2.5           Index file hierarchies

Since an index file that is a super-index may itself contain a super-index, a hierarchy of index files may be constructed. Following from the unit lookup rules, an Index-entry at a lower index file level hides an Index-entry with the same Unit-name and Entry-type (if it is not a Component-entry) at a higher level. The hiding of Index-entries at a higher index file level can be useful, for example, to make use of a development version of a unit maintained in a local directory rather than the stable project-level version, as shown below:

 

 

proj_dir/project.idx

A spec is in proj_dir/a.ads

B spec is in proj_dir/b.ads

  …

P spec is in proj_dir/p.ads

 

local.idx

superindex is in proj_dir/project.idx

-- Use project index file for all references except for those

-- to the package under development, P.

P specification is in my_dir/p.ads

P body is in my_dir/p.adb

4.2.6           Private child packages in index files

Unit names that represent children are expressed in the normal way, for example

Autopilot.Altitude specification is in altitude_

states that the specification of package Altitude, a child of package Autopilot, is in the file named altitude_.ada.

The model of private child packages in SPARK is that they behave rather like embedded packages and so their “own variables” must appear as refinement constituents in the parent body.  Private children are thus regarded as components of their parent.  In the general case, this rule also extends to the public descendants of private children.  Therefore, before any package body is examined, the specifications of any private children of the package (and any public descendants of those children) must first be examined.

So that the Examiner can locate these other packages without the user providing them explicitly on the command line, a Component-entry may be placed in an index file. As described in the earlier grammar, a Component-entry states the names of the components of a given package in a Component-list. For example,

Autopilot components are Autopilot.Altitude,
                         Autopilot.Heading

states that the private children of package Autopilot are called Autopilot.Altitude and Autopilot.Heading.  Other index entries would then be used in the normal way (as in the previous example above) to locate the relevant source files.

4.2.7           Example index files

queue.idx

SuperIndex                    is in library

QueueOperations specification is in queue_

QueueOperations body          is in queue

Stacks          auxindex      is in stacks

stacks.idx

Stacks          specification is in stacks_

Stacks          body          is in stacks

Stacks.Pop      subunit       is in stackpop

Stacks.Push     subunit       is in stackpush

In the above example queue_ and queue refer to files queue_.ada and queue.ada which contain SPARK source code (note that file names are case sensitive on some systems). The name stacks refers to file stacks.idx which is another (auxiliary) index file. Finally, library refers to a further (super) index file library.idx in which the search will recommence if a required unit cannot be found in the index file supplied and the auxiliary indexes specified within it.

4.3               The meta file

A meta file is a list of filenames: this allows the user to specify that this list of files is to be Examined, simply by naming the meta file (preceded by the @ character) on the command line. The files specified in this way behave exactly as if they had been specified on the command line, so a listing file is produced for each (unless specifically suppressed). This method allows the repeated Examination of groups of files without the need to specify them on the command line. The default extension for a meta file name is .smf. Meta files can include Ada-style comments.

4.3.1           File format

Meta-file = Argument-list

Argument-list = Argument { Argument }

Argument = ( File-spec [ Listing-option ] [ “-vcg” ] ) | Meta-file-spec

Listing-option = ( “-listing_file=” file-spec | “-nolisting_file” )

Meta-file-spec = “@”file-spec

Note that the format for the arguments is exactly as on the command line, except that here they may be separated by line breaks. Note also that meta files can be nested.

4.3.1.1        Meta file per-file options

It is possible to specify a few per-file options in a meta file. Any option given applies only to the preceding file. They are:

·          -listing_file=FILE and -nolisting_file
These control whether listing files are generated.

·          -vcg
This option causes VCs to be generated for the file. This option may be useful for projects where only some files are subject to proof.

4.3.2           Example of using a meta file

A meta file could be used as an alternative to an index file to specify common sets of files to be analysed. The following hypothetical example would cause the Examiner to process the files specified in common.smf followed by those in myunit.smf and then main.ads and main.adb (again, note that file names are case sensitive on Unix systems). It would not produce listing files for any of the files specified in common.smf but would produce listings with the extension .lst for all the other files. Additionally VCs will be generated for stack.adb, but not for any other file.

spark @common.smf @myunit.smf main.ads main.adb

common.smf

constants.adb -nolisting

io.adb -nolisting

library.adb -nolisting

myunit.smf

stack.adb -vcg

queue.adb

4.4               The target compiler data file

This file allows various implementation-dependent values to be supplied to the Examiner. The availability of these values significantly improves the generation and discharge of VCs associated with run-time checks; there is also a positive impact on some well-formedness checks. The use of the target compiler data file is mutually exclusive with the use of the target configuration file.

It is only possible to supply values for Integer'first and Integer'last (and their Long_Integer equivalents); the Examiner will also deduce the values of Positive'last and Natural'last from Integer'last. The use of the target compiler data file is now deprecated, and it is recommended that new users use the target configuration file detailed in section 4.5.

The format of each line of the file is:

line = typemark’attribute_name “=” integer | based_literal

typemark = “integer” | “long_integer”

attribute_name = “first” | “last”

Note that the lines in the file do NOT end with semicolons.

4.4.1           Example

integer'first = -32768

integer'last  =  32767
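If the target compiler’s Long_Integer is also of interest, the file may additionally supply its bounds. The values below are illustrative only, for a hypothetical 64-bit Long_Integer:

integer'first = -32768
integer'last  =  32767
long_integer'first = -9223372036854775808
long_integer'last  =  9223372036854775807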

4.5               The target configuration file

This file serves a similar purpose to the target data file, in that it allows implementation dependent values to be supplied to the Examiner. However, it is a significantly more general mechanism, and has a greater positive impact on the generation and simplification of VCs, as well as static semantic checking.  The use of the target configuration file is mutually exclusive with the use of the target data file. The use of the target configuration file is recommended.

NB. If the target configuration file is specified on the command line, but cannot be found by the Examiner, then analysis of the source files will not be carried out, and a warning will be emitted.

4.5.1           Syntax

The format of the file resembles a number of SPARK package specifications, concatenated in one file. The grammar of the file is as follows:

config_file = config_defn { , config_defn }

config_defn = “package” package_name “is” [seq_defn] “end” package_name “;”

seq_defn = defn { , defn }

defn = fp_type_defn | int_type_defn | int_subtype_defn | fp_const_defn | int_const_defn | private_defn | typed_const_defn

fp_type_defn = “type” fp_type_name “is” “digits” int_literal “range” fp_literal “..” fp_literal “;”

int_type_defn = “type” int_type_name “is” “range” int_expr “..” int_expr “;”

private_defn = “type” private_type_name “is” “private” “;”

int_expr = un_exp_part [ add_sub int_literal ]

add_sub = “+” | “-”

un_exp_part = [ “-” ] exp_part

exp_part = int_literal “**” int_literal | int_literal

int_subtype_defn = “subtype” int_subtype_name “is” simple_name “range” int_expr “..” int_expr “;”

fp_const_defn = fp_const_name “:” “constant” “:=” fp_literal “;”

int_const_defn = int_const_name “:” “constant” “:=” int_expr “;”

typed_const_defn = const_name “:” “constant” type_mark “:=” identifier “;”

int_literal = <a valid SPARK integer literal>

fp_literal = <a valid SPARK floating point literal>

Although the format of the target configuration file resembles a SPARK source file, it is important to note that it is not one, and that only items from the above grammar are allowed: in particular, the range of acceptable expressions is much more limited. Standard Ada comments may also be used.

4.5.2           Example

The following is an example target configuration file for GNAT Pro 3.14a1 under Win32.

package Standard is

 

   type Integer is range -2**31 .. 2**31-1;

   type Short_Short_Integer is range -2**7 .. 2**7-1;

   type Short_Integer is range -2**15 .. 2**15-1;

   type Long_Integer is range -2**31 .. 2**31-1;

   type Long_Long_Integer is range -2**63 .. 2**63-1;

 

   type Short_Float is digits 6 range -3.40282E+38 ..  3.40282E+38;

   type Float is digits 6 range -3.40282E+38 ..  3.40282E+38;

   type Long_Float is digits 15

     range -1.79769313486232E+308 ..  1.79769313486232E+308;

   type Long_Long_Float is digits 18

     range -1.18973149535723177E+4932 ..  1.18973149535723177E+4932;

 

end Standard;

 

package System is

   type Address is private;

 

   Storage_Unit : constant := 8;

   Word_Size : constant := 32;

   Max_Int : constant := 2**63-1;

   Min_Int : constant := -2**63;

   Max_Binary_Modulus : constant := 2**64;

 

   Max_Base_Digits : constant := 18;

   Max_Digits : constant := 18;

 

   Fine_Delta : constant := 1.0842E-19;

   Max_Mantissa : constant := 63;

 

   subtype Any_Priority is Integer range 0 .. 31;

   subtype Priority is Any_Priority range 0 .. 30;

   subtype Interrupt_Priority is Any_Priority range 31 .. 31;

 

   Default_Bit_Order : constant Bit_Order := Low_Order_First;

 

end System;

4.5.3           Legality rules

The Examiner supports ‘package’ specifications in the target configuration file. In SPARK95 mode, both package Standard and package System may be specified.  In addition, if the Ravenscar Profile is selected, package Ada.Real_Time may be specified.  In SPARK83 mode, only package Standard is allowed. The packages may appear in any order.

In package Standard, the following types may be declared:

·          Integer;

·          Float;

·          Any signed integer type which has ‘_Integer’ as a suffix; and

·          Any floating point type which has ‘_Float’ as a suffix.

In package System, the type Address may be declared, as may the following subtypes:

·          Any_Priority;

·          Priority; and

·          Interrupt_Priority.

The type System.Bit_Order is implicitly declared, along with the appropriate enumeration literals (Low_Order_First and High_Order_First) and System.Default_Bit_Order as a deferred constant. It is possible to override the predefined declaration with a statement as seen in the example above. The only legal const_name is Default_Bit_Order, the only legal type_mark is Bit_Order, and the only legal identifiers are Low_Order_First and High_Order_First.
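For example, a configuration file for a hypothetical big-endian target could override the default with:

   Default_Bit_Order : constant Bit_Order := High_Order_First;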

Additionally, the following named numbers may be defined:

·          Storage_Unit;

·          Word_Size;

·          Max_Binary_Modulus;

·          Min_Int;

·          Max_Int;

·          Max_Digits;

·          Max_Base_Digits;

·          Max_Mantissa; and

·          Fine_Delta.

In package Ada.Real_Time, the only allowed declaration is for type Seconds_Count.

4.5.4           Static semantics

The ‘package’ specifications are expected to accord with the normal SPARK rule that the package name be specified in the ‘end’ clause. All declarations within the packages are optional, subject to any additional rules below.

In package Standard, type Integer and all ‘_Integer’ types are expected to be signed integer types. Similarly, type Float and all ‘_Float’ types are expected to be constrained floating point types.

In package System, which may only be included in SPARK95 mode, the following well-formedness checks apply:

·          Type Address must be private. The declaration of type Address implicitly declares a deferred constant System.Null_Address of type Address;

·          Storage_Unit, Word_Size, Max_Binary_Modulus, Min_Int, Max_Int, Max_Digits, Max_Base_Digits and Max_Mantissa are all expected to be named integers.

·          Fine_Delta is expected to be a named real.

·          Subtype Any_Priority is expected to have Integer as its parent type; additionally, if Any_Priority is specified, both Priority and Interrupt_Priority must also be specified.

·          Subtypes Priority and Interrupt_Priority are expected to have Any_Priority as their parent types. The range of Priority must include at least 30 values. The declaration of subtype Priority implicitly defines a constant Default_Priority of type Priority.

·          The following relations must hold between Any_Priority, Priority and Interrupt_Priority:

-        Any_Priority’First = Priority’First;

-        Any_Priority’Last = Interrupt_Priority’Last;

-        Priority’Last + 1 = Interrupt_Priority’First.

·          Max_Binary_Modulus must be a positive power of 2.

In addition, standard SPARK rules on redeclaration of existing identifiers, empty ranges and legality of subtype ranges apply.

4.5.5           Configuration file generator

An Ada source file named confgen.adb is distributed with the Examiner; when compiled and run, it automatically generates a valid target configuration file. It is located in the same directory as the Examiner binary. Please note that the resultant configuration file will only be valid for SPARK95 usage, since it includes package System. The package specifications are generated on the standard output, and can be redirected to a file in the normal fashion.

The configuration file generator will probably require some minor modification (depending on whether the target compiler supports Long_Integer, for example), so it is suggested that the user inspect the source code before use.
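Assuming a GNAT toolchain is available on the path, the generator might be built and run as follows (the build command and output file name are illustrative):

   gnatmake confgen.adb
   confgen > target.cfg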

NOTE: In an environment where both a host compiler and a target cross-compiler are being used, it is very important that the configuration file is valid for the final target computer (i.e. generated using the cross-compiler), not the host compiler.

4.6               The warning control file

The Examiner generates semantic warnings when certain program constructs are found. The warnings are raised because the Examiner identifies properties of the SPARK source which, although not illegal, may be of interest to the user, or because certain language features could be used in a manner that changes the meaning of a program in ways that the Examiner could not detect. For example, representation clauses could be used to cause two variables to overlap so that changing one also changed the other. The warnings generated are listed in sections 6.3 and 6.4.

The warning control file is described in more detail in section 9.2.

4.7               VC and DPC output file structure

The output files from the VC and DPC generator are produced in a directory tree structure, as follows.

For the remainder of this section, all comments regarding VCs also apply to DPCs – the directory and file naming is the same, with only the file extensions changing.

A tree of subdirectories is created beneath the current directory, with one level for each level of embedding of scopes present in the code from which VCs are being generated. Within these directories the output files take their name from the name of the subprogram from which they were generated. The names of these directories and files are also transformed into lower case. Thus VCs for subprogram P within package K will appear in ./k/p.vcg, while those for subprogram Q embedded directly within P will appear in ./k/p/q.vcg.

The following short example illustrates the process.

package body P

is

  procedure Double(X : in out Integer)

  --# derives X from X;

  --# post X = X~ * 2;

  is

    function Sum(A, B : Integer) return Integer

    --# return A + B;

    is

    begin

      return A + B;

    end Sum;

 

  begin --Double

    X := Sum(X, X);

  end Double;

end P;

This would produce the following directories and files if VCs were generated:

·          A directory called p containing a directory called double and files double.fdl, double.rls and double.vcg

·          Files sum.fdl, sum.rls and sum.vcg within directory p/double.

VCs generated from subprograms in child packages appear in a similar directory sub-tree rooted at a directory with the name of the parent package followed by a single underbar.  For example, generation of VCs from:

package Parent.Child

is

  procedure Inc(X : in out Integer)

  --# derives X from X;

end Parent.Child;

would result in a directory called parent_ containing a directory called child containing files inc.fdl, inc.rls and inc.vcg.
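The naming scheme described above can be sketched as follows. This is an illustrative model, not the Examiner's own code; it maps a chain of enclosing scopes to the lower-case path of a .vcg file, with a child package's parent name gaining a trailing underbar.

```python
# Illustrative sketch of the VC output-file naming scheme (not the
# Examiner's code). Enclosing scopes map to lower-case directories;
# a child package's parent name gains a trailing underbar.

def vcg_path(scopes, subprogram):
    parts = []
    for scope in scopes:
        if "." in scope:                     # child package, e.g. Parent.Child
            parent, child = scope.split(".", 1)
            parts += [parent.lower() + "_", child.lower()]
        else:
            parts.append(scope.lower())
    return "/".join(parts + [subprogram.lower() + ".vcg"])

print(vcg_path(["K"], "P"))                  # k/p.vcg
print(vcg_path(["K", "P"], "Q"))             # k/p/q.vcg
print(vcg_path(["Parent.Child"], "Inc"))     # parent_/child/inc.vcg
```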

If the length of any file name exceeds the maximum length permitted, then warning 406 (see section 10.2) is raised and no VCs are generated.

4.8               HTML Output

The Examiner can produce browsable HTML output files that make inspection of the Examination results much easier. The output is compliant with the transitional HTML 4.0 specification and can be viewed in any HTML 4.0 browser. HTML generation is invoked using the -html switch described in the table at section 3.1.3.

4.8.1           Files Generated

In addition to the usual output, the Examiner produces the following files during HTML generation:

spark.htm            This is the starting point for viewing the HTML output.

spark_rep.htm       The HTML version of the Examiner's report file.  The file's name is based on the name of the report file and will change if a different report file name is used.

<file>_lst.htm      For each file specified on the command line or in a meta-file for Examination, a listing file is produced.  The Examiner produces an HTML version of the listing file also.  The name of the HTML listing file is based on the name of the plain-text listing file; therefore using the -listing_extension option will change the names of the files.  Suppression of listing file generation with the -nolisting option will suppress generation of the HTML listing file also.

errors.htm            This file contains explanations of error messages and is referenced by the HTML versions of the report file and listing files.

blank.htm                           This file ensures compatibility with all browsers.  This is the default file for the bottom frame, as displayed when spark.htm is opened.

4.8.2           Browsing the Report File

The Examiner generates HTML output in a subdirectory of the current working (default) directory. The name of this directory can be specified on the command-line as a parameter to the html switch. If no directory name is given, then “HTML” is used. If this directory does not exist, it is created by the Examiner. The HTML output can be viewed by opening the file "spark.htm" from the HTML subdirectory in an HTML browser.

This file splits the browser window into two frames.  The top frame contains the report file, which is an HTML version of the report file described in section 4.1 and acts as a navigation tool.  Following links in the report file opens files in the bottom frame for viewing.  Figure 3 shows the report file in the top frame and a listing file being viewed in the bottom frame.

Figure 3 HTML Output frames

4.8.2.1        Option links

The report file contains links to any files specified on the command-line.  Figure 4 shows the links that are available in this section of the report file.  Figure 4 also shows that links across physical devices are marked as unavailable.  This is because the links cannot be guaranteed to be available when browsing.

In Figure 4, the warning control file link has been selected and is displayed in the bottom frame.

Figure 4 HTML Output option links

4.8.2.2        File links

The "Selected Files", "Index Filename(s)" and "Meta Files" sections of the report file contain links to the files that are referenced.

The "Source Filename(s)" section also contains links to the source files that are referenced.  Alongside there are links to the analysis sections of the report file.  Clicking on "[View analysis]" will take you to the report of the analysis of that file (see Figure 5).

Figure 5 HTML Output file links

4.8.2.3        Analysis section links

The analysis section links are also shown in Figure 5.  The source filename and listing filename are referenced and there is also a link to the HTML version of the listing file.

4.8.2.4        Error links

If the analysis section of the report file shows that errors were found, the error reports also have useful links.

The error message itself is linked to an explanation of the error, displayed in the bottom frame (see Figure 6).

Clicking on the line number of the error will display that line in the listing file in the bottom frame (see Figure 7).  Note that the line number links will not work if no listing file was generated.

Figure 6 HTML Output error message links

Figure 7 HTML Output error links

4.8.2.5        Listing File Links

The listing file also contains links from error messages to their explanations, as shown in Figure 7.

4.9               SLI files

The Examiner generates SLI files by default, for cross navigation in an IDE (e.g. GPS or Emacs) or with gnatfind. For each SPARK source file that is analysed, the Examiner generates a corresponding SLI file with the extension “.sli”. The switch ‘nosli’ may be used to suppress generation of SLI files if required.

It should be noted that the SLI files complement, but do not replace, the ALI files generated by the compiler for Ada navigation. If the GNAT compiler is installed, the ALI files are generated as part of the build process (Build->Project->Build All in GPS). Otherwise, they can be generated via Build->Recompute Xref info.

4.10          Output Directory Control

The “output_directory” option allows control of the directory where the Examiner’s report file, listing files, SLI files, VCs and DPCs are generated.

By default, and in the absence of this option, these files are generated in and below the Examiner’s current working directory.

This option may either specify an absolute path-name or a name that is relative to the current working directory. In either case, the indicated path-name must denote an existing directory, otherwise the Examiner will immediately terminate with the message

Cannot find or write to output directory
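For example, the following hypothetical invocation (see section 3.1 for the exact switch syntax; the meta file name is illustrative) directs all generated files below an existing directory named build:

   spark -output_directory=build @project.smf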

4.10.1       Interaction with listing and report file options

The “listing_extension” option may still be used to control the generation of listing file names. Similarly, the “listing_file” option (either on the command-line or in a meta-file) may still be used to individually control the listing file-name for a particular source file. If the latter option specifies a full absolute path-name, then this completely overrides the indicated output directory.

Similarly, the “report_file” option may be used to specify the name of the Examiner’s report file within the indicated output directory. If a full absolute path-name is given, then this completely overrides the indicated output directory.

4.10.2       Interaction with HTML generation

The “html” option operates completely independently of the “output_directory” option. As such, HTML may be generated in a completely different directory, the same directory, or a sub-directory of the chosen output directory.

4.10.3       Interaction with VC and DPC generation

When an “output_directory” is specified, all VCs and DPCs are generated in sub-directories of that directory, using the scheme described in section 4.7. These sub-directories are created by the Examiner as required.

4.10.4       Interaction with meta-files

Where a single meta-file, or a set of meta-files in a single directory, is used, output files are generated as described above.

Where meta-files are nested and appear in multiple directories, the Examiner does not attempt to re-create the source sub-directory structure below the specified output directory. This is consistent with the Examiner’s existing behaviour when an output_directory is not specified.

 

5                       Lexical and syntactic analysis

5.1               General description

The Examiner employs an LALR parsing mechanism. It is similar to a conventional LALR parser, except that it uses explicit shift and reduce tables; these tables have been generated explicitly to facilitate the correctness proof of the parser itself.

5.2               Error messages

The Examiner employs a uniform method of error reporting, in terms of the syntactic entities (terminal and non-terminal symbols) of the SPARK grammar. Reserved words, operators and punctuation marks in syntax error messages are enclosed in quotes.

In almost all respects the grammar employed by the Examiner is the same as that given in the SPARK Definition; however, some minor transformations have been applied, to reduce the original grammar to LALR form. The messages given are nevertheless mostly quite comprehensible. The definitive grammar for any particular release of the Examiner can be found in the SPARK Pro sources in the file examiner/SPARK.LLA.

Syntax error messages occur in one of the following forms. In these examples, NON_TERMINAL_A and NON_TERMINAL_B represent non-terminal symbols such as PROCEDURE_SPECIFICATION, while TERMINAL_A and TERMINAL_B represent terminal symbols such as IDENTIFIER or a reserved word.

5.2.1           TERMINAL_A {or TERMINAL_B} expected.

The parser is in a state where exactly one of the set of terminal symbols {TERMINAL_A, TERMINAL_B, …} is expected next in the input file.

Common examples:

***        Syntax Error : ";" expected.

If this is reported at the end of the input file, it may well be caused by the misspelling of an identifier in a hide directive. The parser then skips all the following text looking for the misspelled identifier, but finds the end of file first, where it reports a syntax error.

***        Syntax Error : reserved word "INHERIT" expected.

This occurs where the annotation on a subprogram body is placed after the reserved word "is" instead of before it.
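The misplacement can be sketched as follows (the subprogram and its annotation are illustrative); the annotation must be written between the specification and "is":

```ada
procedure Inc (X : in out Integer)
is
--# derives X from X;   -- wrong: the annotation must precede "is"
begin
   X := X + 1;
end Inc;
```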

5.2.2           No NON_TERMINAL_A can start with TERMINAL_A.

Reported when the parser is in a state expecting NON_TERMINAL_A, but it receives a TERMINAL_A which cannot possibly be the start of a NON_TERMINAL_A.

Common examples:

***        Syntax Error : No APRAGMA can start with reserved word "IS".

 

This can occur when a stub for an embedded subprogram is wrongly terminated by a semicolon.
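A sketch of the situation (names are illustrative): the body stub should read "procedure Q is separate;", but a spurious semicolon after the specification turns it into a declaration, leaving "is separate;" stranded:

```ada
package body P is
   procedure Q;   -- spurious semicolon: should be "procedure Q is separate;"
   is separate;
end P;
```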

5.2.3           Neither NON_TERMINAL_A nor NON_TERMINAL_B can start with TERMINAL_A.

Reported when the parser is in a state expecting one of the non-terminals NON_TERMINAL_A or NON_TERMINAL_B, but it receives a TERMINAL_A which cannot possibly be the start of a NON_TERMINAL_A or NON_TERMINAL_B.

5.2.4           TERMINAL_A {, TERMINAL_B} or start of NON_TERMINAL_A {or NON_TERMINAL_B} expected.

Issued when the parser is expecting one of TERMINAL_A or TERMINAL_B, or the start of a NON_TERMINAL_A or NON_TERMINAL_B, but the next token cannot be correct.

5.2.5           TERMINAL_A cannot be followed by TERMINAL_B here.

Issued when a particular sequence of terminal symbols TERMINAL_A, TERMINAL_B cannot be legal.

5.2.6           No complete NON_TERMINAL_A can be followed by TERMINAL_A here.

Issued when a particular NON_TERMINAL_A has been parsed successfully, but the next token TERMINAL_A cannot be correct.

Common Examples:

***        Syntax Error : No complete PROCEDURE_SPECIFICATION can be followed by ANNOTATION_START here.

This can occur when the reserved word "body" has been omitted from the declaration of a package body. The error will occur at the annotation placed between the specification and the reserved word "is" of the first subprogram.

***        Syntax Error : No complete PROCEDURE_SPECIFICATION can be followed by reserved word "IS" here.

This can occur when the reserved word "body" has been omitted from the declaration of a package body. The error will occur at the reserved word "is" which introduces the body of the first subprogram.
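This situation can be sketched as follows (names are illustrative); the error is reported at the "is" which introduces the first subprogram body:

```ada
package P is        -- wrong: should read "package body P is"

   procedure Op is  -- error reported at this "is"
   begin
      null;
   end Op;

end P;
```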

***        Syntax Error : No complete SIMPLE_EXPRESSION can be followed by ")" here.

This can occur in an aggregate expression when there is a mixture of named and positional association being used.

***        Syntax Error : No complete SIMPLE_EXPRESSION can be followed by "," here.

This can occur in an aggregate expression when there is a mixture of named and positional association being used.
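A sketch of an aggregate mixing the two association forms (type and object names are illustrative):

```ada
type Vector is array (1 .. 3) of Integer;
V : Vector;

--  Within a statement sequence:
V := Vector'(1, 2 => 4, 3 => 9);   -- positional then named: rejected
V := Vector'(1, 4, 9);             -- all positional: accepted
```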

5.3               Reserved words

In addition to the reserved words of Ada, certain additional words listed below are reserved words of SPARK. These additional reserved words are associated with SPARK's core annotations and proof contexts.

 

assert        check       derives        from

global        hide        hold           inherit

initializes   invariant   main_program   own

post          pre         some

 

In addition to these reserved words, users may not use certain other identifiers which are reserved for use in FDL (Functional Description Language), the underlying logic in which theorems about SPARK programs are expressed. Use of these identifiers as Ada identifiers would lead to ambiguous and badly typed VCs. The predefined FDL identifiers involved are listed below. A complete list of the reserved words of SPARK and FDL is also given in the SPARK Definition.  Note that this restriction can be avoided by use of the "fdl_identifiers=accept" command line switch (see Section 3.1), although doing so will prevent use of the Examiner's VC generation capabilities.

 

are_interchangeable abstract             as

assume              const                div

element             finish               first

for_all             for_some             goal

last                may_be_deduced       may_be_deduced_from

may_be_replaced_by  nonfirst             nonlast

not_in              odd                  pending

pred                proof                real

requires            save                 sequence

set                 sqr                  start

strict_subset_of    subset_of            succ

update              var                  where

 

For backward compatibility with earlier definitions of the SPARK language, the reserved word "some" is controlled by the command line option "fdl_identifiers".  If "fdl_identifiers=accept" is selected then "some" may be used as a normal identifier.

"abstract" is a reserved word of SPARK 95.  For SPARK 83 it is a predefined FDL identifier which can be controlled by the "fdl_identifiers" command line option.

In addition, identifiers beginning with the character sequences fld_ or upf_ are also regarded as predefined FDL identifiers.

6                       Static semantic analysis

6.1               General description

This analysis involves checking that a text obeys the static-semantic rules of SPARK (other than rules relating to control-, data- and information-flow, which are discussed in chapters 7 and 8).

Where error messages contain references to other documents, such as the SPARK report, the full title of the references used will appear in the report file.

6.2               Error messages

The following section explains error messages relating specifically to SPARK restrictions, the incorrect use of annotations, or inconsistencies between annotations and executable code. A numeric error code is given in each message, and the messages are presented in numerical order. An explanation is given for each message except where the message text is self-explanatory.


 

***        Semantic Error              :1: The identifier YYY.XXX is either undeclared or not visible at this point.

If the identifier is declared in a separate (or parent) package, the package must be included in an inherit clause and the identifier prefixed with the package name. Ensure that there are no errors in the declaration of the identifier.
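As a sketch (package and type names are illustrative), an identifier declared in package Counters is made visible by inheriting that package and using the prefixed form:

```ada
--# inherit Counters;
package Client
--# own C;
is
   C : Counters.Counter_T;   -- visible: Counters is inherited and the name is prefixed
end Client;
```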

***        Semantic Error              :2: XXX does not denote a formal parameter for YYY.

***        Semantic Error              :3: Incorrect number of actual parameters for call of subprogram XXX.

***        Semantic Error              :4: More than one parameter association is given for formal parameter XXX.

***        Semantic Error              :5: Illegal use of identifier XXX.

Usually associated with the use of an identifier other than a package name as a prefix in a selected component.

***        Semantic Error              :6: Identifier XXX is not the name of a variable.

***        Semantic Error              :7: Identifier XXX is not the name of a procedure.

***        Semantic Error              :8: There is no field named XXX in this entity.

Issued when the selector in a selected component of a record references a non-existent field.

***        Semantic Error              :9: Selected components are not allowed for XXX.

Occurs if the prefix to a selected component representing a procedure in a procedure call statement or a type mark is not a package. Also occurs if a selector is applied in an expression to an object which is not a record variable.

***        Semantic Error              :10: Illegal redeclaration of identifier XXX.

***        Semantic Error              :11: There is no package declaration for XXX.

Issued if a package body is encountered for which there is no package specification.

***        Semantic Error              :12: Own variable XXX can only be completed by a variable declaration, not a constant.

If the object in question is really a constant, then remove it from the enclosing package’s own variable annotation.

***        Semantic Error              :13: A body for subprogram XXX has already been declared.

***        Semantic Error              :14: Illegal parent unit name.

Issued if the name in a "separate" clause of a subunit does not correctly identify a compilation unit.  Common causes of this error are a syntax error in the parent unit or omitting the parent unit specification and/or parent unit body entries from the index file.

***        Semantic Error              :15: The stub for XXX is either undeclared or cannot be located.

Common causes of this error are an error in the declaration of the stub or the omission of the parent unit body from the index file.

***        Semantic Error              :16: A body for package XXX has already been declared.

***        Semantic Error              :17: A body stub for package XXX has already been declared.

***        Semantic Error              :18: Identifier XXX is not the name of a package.

***        Semantic Error              :19: Identifier XXX is not the name of a procedure.

***        Semantic Error              :20: Illegal operator symbol.

Issued if a renaming declaration contains a non-existent operator.

***        Semantic Error              :21: This entity is not an array.

Issued if an attempt is made to index into a name which does not represent an array.

***        Semantic Error              :22: The type in this declaration is not consistent with the previous declaration of XXX.

Occurs when the type given in the Ada declaration of an own variable differs from that "announced" in the package's own variable clause.

***        Semantic Error              :23: No parameter association is given for formal parameter XXX.

***        Semantic Error              :24: The identifier XXX (exported by called subprogram) is not visible at this point.

When a procedure is called any global variables exported by that procedure must be visible at the point of call. This error message indicates that the global variable concerned is not visible. It may be that it needs to be added to the global annotation of the procedure containing the call (or some further enclosing subprogram) or it may be that an inherit clause is missing from the package containing the call.

***        Semantic Error              :25: The identifier XXX (imported by called subprogram) is not visible at this point.

When a procedure is called any global variables imported by that procedure must be visible at the point of call. This error message indicates that the global variable concerned is not visible. It may be that it needs to be added to the global annotation of the subprogram containing the call (or some further enclosing subprogram) or it may be that an inherit clause is missing from the package containing the call.
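As a sketch (names are illustrative): if Log.Append imports and exports the own variable Log.State, then the caller must have Log.State visible (via an inherit clause on its enclosing package) and must itself list it:

```ada
procedure Caller
--# global in out Log.State;            -- required: Log.Append imports and exports it
--# derives Log.State from Log.State;
is
begin
   Log.Append;
end Caller;
```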

***        Semantic Error              :26: The deferred constant XXX does not have an associated full definition.

Issued at the end of a package specification if no full declaration has been supplied for a deferred constant declared in the package specification.

***        Semantic Error              :27: The private type XXX does not have an associated full definition.

Issued at the end of a package specification if no full declaration has been supplied for a private type declared in the package specification.

***        Semantic Error              :28: The own variable XXX does not have a definition.

Issued at the end of a package body if an own variable announced in the package specification has neither been given an Ada declaration nor refined.

***        Semantic Error              :29: The subprogram XXX, declared in the package specification, does not have an associated body.

***        Semantic Error              :30: Attribute XXX is not yet implemented in the Examiner.

The attribute is identified in Annex K of the SPARK 95 report as a valid SPARK 95 attribute but the Examiner does not currently support it. Please contact Altran Praxis if use of this attribute is important to your project. It is possible to work round the omission by putting the use of the attribute inside a suitable function which is hidden from the Examiner.

***        Semantic Error              :31: The prefix of this attribute is not an object or type.

***        Semantic Error              :32: Illegal type conversion.

Likely causes are type conversions involving record types or non-convertible arrays.

***        Semantic Error              :33: Illegal aggregate.

Issued if the prefix of an aggregate is not a composite type.

***        Semantic Error              :34: Illegal procedure call.

Issued if a call is made to a user-defined subprogram in a package initialization part.

***        Semantic Error              :35: Binary operator is not declared for types XXX and YYY.

Indicates use of an undeclared binary operator; this message means that the types on each side of the operator cannot appear with the operator used, e.g. attempting to add an integer to an enumeration literal.

***        Semantic Error              :36: Expression is not static.

***        Semantic Error              :37: Expression is not constant.

***        Semantic Error              :38: Expression is not of the expected type.

***        Semantic Error              :39: Illegal use of unconstrained type.

An unconstrained array type or variable of such a type is illegally used. Use of unconstrained arrays in SPARK is limited to passing them as parameters, indexing into them and taking attributes of them.  This message also arises if a string literal is used as an actual parameter where the formal parameter is a string subtype. In this case, the error can be removed by qualifying the string literal with the subtype name.
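The string literal case can be sketched as follows (names are illustrative), assuming a procedure Greet taking a parameter of a constrained string subtype:

```ada
subtype Name_T is String (1 .. 5);

--  Given: procedure Greet (N : in Name_T);

Greet ("Hello");            -- rejected with this error
Greet (Name_T'("Hello"));   -- accepted: literal qualified with the subtype name
```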

***        Semantic Error              :40: Numeric or Time_Span type required.

This operator is only defined for numeric types and, if the Ravenscar Profile is selected, for type Ada.Real_Time.Time_Span.

***        Semantic Error              :41: Array type required.

Issued if a subtype declaration taking the form of a constrained subtype of an unconstrained array type is encountered but with a type mark which does not represent an array.

***        Semantic Error              :42: Incompatible types.

Issued when a name represents an object which is not of the required type.

***        Semantic Error              :43: Range is not constant.

***        Semantic Error              :44: Scalar type required.

The bounds of an explicit range must be scalar types.

***        Semantic Error              :45: Range is not static.

***        Semantic Error              :46: Discrete type required.

***        Semantic Error              :47: The definition of this type contains errors which may make this array definition invalid.

Issued if an array type definition is encountered where one or more of the index types used in the definition contained errors in its original declaration.  For example, SPARK requires array index bounds to be constant (known at compile time) so an attempt to use an illegal subtype with variable bounds as an array index will generate this message.

***        Semantic Error              :48: Subtypes of private types are not permitted.

Issued if an attempt is made to declare a subtype of a private type in a location where the full view of the type is not visible.

***        Semantic Error              :49: Attribute XXX takes only one argument.

Only SPARK 95 attributes 'Min and 'Max require two arguments.

***        Semantic Error              :50: Initializing expression must be constant.

To assign a non-constant expression to a variable, an assignment statement in the body of the program unit (following the 'begin') must be used.

***        Semantic Error              :51: Arrays may not be ordered.

Issued if an ordering operator such as "<" is encountered between objects of an array type other than string or a constrained subtype of string.

***        Semantic Error              :52: Only Scalar, String and Time types may be ordered.

Ordering operators are only defined for scalar types and type String plus, if the Ravenscar Profile is selected, types Time and Time_Span in package Ada.Real_Time.

***        Semantic Error              :53: Illegal others clause.

In SPARK record aggregates may not contain an others clause.

***        Semantic Error              :54: Illegal attribute: XXX.

Issued when an attribute not supported by SPARK is used.

***        Semantic Error              :55: Attribute XXX takes no argument.

***        Semantic Error              :56: Argument expected.

***        Semantic Error              :57: Fixed type definition must have associated range constraint.

***        Semantic Error              :58: XXX expected, to repeat initial identifier.

Occurs at the end of a package, subprogram, protected type, task type or loop if the terminal identifier does not match the name or label originally given.

***        Semantic Error              :59: Composite subtype definition may not have associated range constraint.

A subtype of the form applicable to a subrange of a scalar type has been encountered but the type provided is not a scalar type.

***        Semantic Error              :60: Illegal choice in record aggregate.

In SPARK record aggregates may not contain multiple choices, each field must be assigned a value individually.

***        Semantic Error              :61: Illegal occurrence of body stub - a body stub may only occur in a compilation unit.

***        Semantic Error              :62: A body for the embedded package XXX is required.

Issued if an embedded package declares subprograms or own variables and no body is provided.

***        Semantic Error              :63: XXX is not a type mark.

***        Semantic Error              :64: Parameters of function subprograms must be of mode in.

***        Semantic Error              :65: Formal parameters of renamed operators may not be renamed.

The names of the parameters used in renaming declarations may not be altered from Left, Right for binary operators and Right for unary operators. These are the names given for the parameters in the ARM, and the SPARK Definition requires that parameter names are not changed.

***        Semantic Error              :66: Unexpected package initialization - no own variables of package XXX require initialization.

Either the package does not have an initializes annotation or all the own variables requiring initialization were given values at the point of declaration.

***        Semantic Error              :67: Illegal machine code insertion. Machine code functions are not permitted in SPARK 83.

This is an Ada 83 rule.  Machine code can only be used in procedures.

***        Semantic Error              :68: Illegal operator renaming - operators are defined on types not subtypes.

Issued if an attempt is made to rename an operator using a subtype of the type for which it was originally implicitly declared.

***        Semantic Error              :69: pragma XXX has two parameters.

***        Semantic Error              :70: pragma Import expected.

***        Semantic Error              :70: pragma Interface expected.

***        Semantic Error              :71: This expression does not represent the expected subprogram or variable name XXX.

Issued if the name supplied in a pragma interface, import or attach_handler does not match the name of the associated subprogram or variable.

***        Semantic Error              :72: Unexpected pragma Import.

Pragma import may only occur in a body stub, or immediately after a subprogram declaration in the visible part of a package, or immediately after a variable declaration.

***        Semantic Error              :72: Unexpected pragma Interface.

Pragma interface may only occur in a body stub or immediately after a subprogram declaration in the visible part of a package.

***        Semantic Error              :73: XXX has already been declared or refined.

Issued if an Ada declaration is given for an own variable which has been refined, or in a refinement clause if an own variable is refined more than once.

***        Semantic Error              :74: XXX does not occur in the package own variable list.

A subject of a refinement definition of a package must be an own variable of that package.

***        Semantic Error              :75: Illegal use of inherited package.

Issued if an attempt is made to refine an own variable onto an own variable of a non-embedded package.

***        Semantic Error              :76: Identifier XXX is already declared and cannot be the name of an embedded package.

Issued when a refinement clause in a package body attempts to name an embedded package own variable as a refinement constituent and the name given for the embedded package is already in use.

***        Semantic Error              :77: Variable XXX should occur in this own variable clause.

Occurs in the own variable clause of a package embedded in another package if an own variable which is a refinement constituent of an own variable of the enclosing package is omitted.

***        Semantic Error              :78: Initialization of own variable XXX is ineffective.

Issued if an own variable occurs in the initialization clause of an embedded package and the own variable concerned is a refinement constituent of another own variable which is not listed in the initialization specification of its package.

***        Semantic Error              :79: Variable XXX should occur in this initialization specification.

Occurs in the initialization clause of a package embedded in another package if an own variable which is a refinement constituent of an initialized own variable of the enclosing package is omitted.

***        Semantic Error              :80: Unexpected own variable clause - no variable in this clause is a refinement constituent.

***        Semantic Error              :81: Own variable clause expected - own variables of this package occur as refinement constituents.

***        Semantic Error              :82: Unexpected initialization specification - no own variables of this package require initialization.

An own variable initialization clause and that of its refinement constituents must be consistent.

***        Semantic Error              :83: Initialization specification expected - own variables of this package require initialization.

Issued if an own variable does not occur in the initialization clause of an embedded package and the own variable concerned is a refinement constituent of another own variable which is listed in the initialization clause of its package.

***        Semantic Error              :84: The refinement constituent XXX does not have a declaration.

Issued at the end of a package if a refinement constituent of a refined own variable has not been given an Ada declaration or further refined.

***        Semantic Error              :85: XXX is not a constituent of any abstract own variable appearing in the earlier global definition for this subprogram.

A variable XXX which has occurred in a refined global annotation is neither a variable that occurred in the earlier global definition nor a refinement constituent of any such variable.

***        Semantic Error              :86: At least one constituent of XXX was expected in this refined global definition.

If the global annotation of a procedure specification contains an own variable and that own variable is later refined then at least one refinement constituent of the own variable shall appear in the second global annotation supplied for the procedure body.

***        Semantic Error              :87: Refined global definition expected for subprogram XXX.

A global definition containing abstract own variables was given in the definition for subprogram XXX, in a package specification. A refined global definition is required in the package body.

***        Semantic Error              :88: Variable XXX is not a refinement constituent.

***        Semantic Error              :89: XXX is not a private type declared in this package.

***        Semantic Error              :90: This operator may not be applied to ranges.

***        Semantic Error              :91: Ranges may not be assigned.

***        Semantic Error              :92: Named association may not be used here.

***        Semantic Error              :93: Number of index expressions differs from number of dimensions of array XXX.

***        Semantic Error              :94: Condition is not boolean.

Issued anywhere a boolean expression is required (e.g. in if, exit and while statements) and the expression provided is not of type boolean.

***        Semantic Error              :95: Type mark expected.

***        Semantic Error              :96: Attribute XXX is not valid with this prefix.

***        Semantic Error              :97: Attribute BASE may only appear as a prefix.

'BASE may only be used as a prefix to another attribute.

***        Semantic Error              :98: This expression is not a range.

***        Semantic Error              :99: Unconstrained array expected.

Occurs if a subtype is declared of an array which is already constrained.

***        Semantic Error              :100: Floating point type mark expected.

***        Semantic Error              :101: Fixed point type mark expected.

***        Semantic Error              :102: This is not the name of a field of record XXX.

***        Semantic Error              :103: A value has already been supplied for field XXX.

***        Semantic Error              :104: No value has been supplied for field XXX.

***        Semantic Error              :105: More values have been supplied than number of fields in record XXX.

***        Semantic Error              :106: Range is not of the expected type.

***        Semantic Error              :107: Expression is not of the expected type. Actual type is XXX. Expected type is YYY.

***        Semantic Error              :108: Expression is not of the expected type. Expected any Integer type.

***        Semantic Error              :109: Expression is not of the expected type. Expected any Real type.

***        Semantic Error              :110: Use type clauses following an embedded package are not currently supported by the Examiner.

***        Semantic Error              :111: Package renaming is not currently supported by the Examiner.

***        Semantic Error              :112: A use type clause may not appear here.  They are only permitted as part of a context clause or directly following an embedded package specification.

***        Semantic Error              :113: Private subprogram declarations are not permitted in SPARK 83.

Private subprograms would not be callable in SPARK 83 and are therefore not permitted; they may be declared and called in SPARK 95.

***        Semantic Error              :114: Subtype mark or Range may not be used in an expression in this context.

A subtype mark or an explicit Range attribute may not be used in a context where a simple expression is expected.

***        Semantic Error              :115: In a package body, an own variable annotation must include one or more refinement constituents.

Annotation should be of the form 'own S is A, B, C;'.

***        Semantic Error              :116: View conversion to own type is not permitted in target of an assignment.

***        Semantic Error              :117: Aggregate must be qualified with subtype mark.

Aggregates are qualified expressions so they must be prefixed with a subtype mark. An exception is made in the case of aggregate assignments to unconstrained arrays as the rules of Ada do not permit unconstrained array aggregates to be qualified.
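As a sketch (type and object names are illustrative):

```ada
type Pair is record
   A : Integer;
   B : Integer;
end record;

P : Pair;

--  Within a statement sequence:
P := (A => 1, B => 2);        -- rejected: aggregate not qualified
P := Pair'(A => 1, B => 2);   -- accepted
```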

***        Semantic Error              :118: Aggregate assignment to unconstrained multi-dimensional array not permitted.

Unqualified aggregates may only be used in assignments to one-dimensional unconstrained arrays. SPARK does not permit aggregate assignment to multi-dimensional unconstrained arrays.

***        Semantic Error              :119: Unary operator is not declared for types XXX.

Indicates use of an undeclared unary operator; this message means that the type on the right-hand side of the operator cannot appear with the operator used, e.g. attempting to negate an enumeration literal.

***        Semantic Error              :120: Pragma import not allowed here because variable XXX is already initialized.  See ALRM B.1(24).

***        Semantic Error              :121: 'Flow_Message' or 'Warning_Message' expected.

The identifier indicating what kind of message to justify must be either 'Flow_Message' or 'Warning_Message' or some unique abbreviation of them such as 'Fl' or even 'F'.  Case is ignored.

***        Semantic Error              :122: Error or warning number expected.

This item should be an integer literal representing the error or warning message that is being marked as expected.

***        Semantic Error              :123: This warning number may not appear in an accept annotation.

It does not make sense to allow certain warnings to be justified with the accept annotation. In particular, attempting to justify warnings raised by the justification system itself could lead to some special kind of recursive hell that we would not wish to enter.

***        Semantic Error              :124: Incorrect number of names in accept annotation: should be 0.

This class of error does not reference any variables, and therefore requires no names.

***        Semantic Error              :125: Incorrect number of names in accept annotation: should be 1.

This class of error references one variable, and therefore requires one name.

***        Semantic Error              :126: Incorrect number of names in accept annotation: should be 2.

This class of error references two variables, and therefore requires two names. Two names are needed to justify expected information flow messages such as "X is not derived from Y". Note that for messages of this kind the accept annotation should list the names in the order "export, import".
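A sketch of the expected name order; the variable names and the flow-message number shown here are illustrative, not taken from this manual:

```ada
--  Justifying an expected "X is not derived from Y" flow message.
--  Note the order of the names: export (X) first, then import (Y).
--# accept Flow_Message, 601, X, Y, "X is deliberately independent of Y";
```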

***        Semantic Error              :127: Incorrect number of names in accept annotation: should be 0 or 1.

This class of error references either zero or one variable, and therefore requires either zero or one name. An ineffective assignment error requires the name of the variable being assigned to. An ineffective statement error has no name associated with it.

***        Semantic Error              :128: Incorrect number of names in accept annotation: should be 1 or 2.

This class of error references either one or two variables, and therefore requires either one or two names. One name is required when the export is a function return value.

***        Semantic Error              :129: Assignment to view conversion is not currently implemented.

***        Semantic Error              :130: A type from the current package should not appear in a use type clause.

***        Semantic Error              :131: The package name XXX should appear in a with clause preceding the use type clause.

***        Semantic Error              :132: The unit name or the name of an enclosing package of the unit should not appear in its with clause.

A package should not 'with' itself and a subunit should not 'with' the package (or main program) which declares its stub.

***        Semantic Error              :133: Name in with clause is locally redeclared.

***        Semantic Error              :134: A package name should not appear in its own inherit clause.

***        Semantic Error              :135: The package XXX is undeclared or not visible, or there is a circularity in the list of inherited packages.

Possible causes of this error are: an error in the inherited package specification; omission of an entry for the package specification from the index file; or circular inheritance.

***        Semantic Error              :136: The own variable XXX is not declared in the own variable clause of the corresponding package declaration.

A refinement clause of a package body defines the constituent parts of own variables given in the own variable clause of the corresponding package declaration.

***        Semantic Error              :137: The child package XXX is either undeclared or not visible at this point.

Possible causes of this error are: an error in the child package specification; omission of the child from the parent's component list in the index file; or omission of the child specification entry from the index file.

***        Semantic Error              :138: Child package own variable XXX does not appear in the own variable clause of the child package.

A constituent of a refinement clause which is defined in a child package must be an own variable of the child package.

***        Semantic Error              :139: The variable XXX is not declared in the own variable clause of this package.

A package can only initialize variables declared in its own variable clause.

***        Semantic Error              :140: The predecessor package XXX is either undeclared or not visible at this point.

The parent of a child package must be a library package and must be declared prior to the child package.  If using an index file, the parent must have an entry in the index file and the child package must be listed as a component of the parent package.

***        Semantic Error              :141: The private type XXX is either undeclared or not visible at this point.

***        Semantic Error              :142: The subprogram prefix XXX is either undeclared or not visible at this point.

The prefix should appear in the inherit clause of the current package.

***        Semantic Error              :143: The subprogram YYY.XXX is either undeclared or not visible at this point.

***        Semantic Error              :144: The dotted name YYY.XXX is either undeclared or not visible at this point.

The name must denote an entire variable or an own variable of a package.  If the variable or own variable is declared in a separate (or parent) package, the package must be included in an inherit clause and the identifier prefixed with the package name.

***        Semantic Error              :145: The identifier YYY.XXX is either undeclared or not visible at this point.

The identifier should be a typemark.  If the typemark is declared in a separate (or parent) package, the package must be included in an inherit clause and the identifier prefixed with the package name. Ensure that there are no errors in the declaration of the typemark.

***        Semantic Error              :148: The abstract proof type XXX may not be used to define an own variable in another package.

Own variables may be "type announced" as being of an abstract proof type only where that type is declared later in the same package. Thus --# own State : T; is legal if --# type T is abstract; appears later in the package; however, --# own State : P.T; is illegal if T is an abstract proof type declared in remote package P.

***        Semantic Error              :149: More than one own variable has been announced as being of type XXX which may not therefore be declared as an abstract proof type.

Occurs when an own variable clause announces more than one own variable as being of a type XXX and XXX is later declared as being of an abstract proof type. Each abstract own variable must be of a unique type.

***        Semantic Error              :150: Entire variable expected. The names of constants never appear in mandatory annotations.

Issued when the name of a constant is found in a mandatory annotation such as a global or derives annotation.  Constants should not appear in such annotations.

***        Semantic Error              :151: The variable XXX does not occur either in the package own variable list or as a refinement constituent.

A variable declared in a package must have been previously announced as either an own variable or as a concrete refinement constituent of an own variable.

***        Semantic Error              :152: The number of formal parameters is not consistent with the previous declaration of XXX.

***        Semantic Error              :153: The declaration of formal parameter XXX is not consistent with the subprogram's previous declaration.

Issued if the name, type or parameter mode of a parameter is different in the subprogram body declaration from that declared originally.

***        Semantic Error              :154: The subprogram or task body XXX does not have an annotation.

A subprogram or task body must have a global annotation if it references global variables; a procedure or task body must have a dependency relation to perform information flow analysis.

***        Semantic Error              :155: Unexpected annotation - all annotations required for procedure or task body XXX have already occurred.

Do not repeat global or derives annotations in the body (or body stub) of a subprogram, entry or task except for state (own variable) refinement.

***        Semantic Error              :156: Entire variable expected.

Issued when an identifier which SPARK requires to be an entire variable represents something other than this. Most commonly this message occurs when a component of a structured variable appears in a core annotation.

***        Semantic Error              :157: The name XXX already appears in the global variable list.

***        Semantic Error              :158: XXX is a formal parameter of this subprogram.

Issued in a global annotation if it names a formal parameter of the subprogram.

***        Semantic Error              :159: The name XXX has already appeared as an exported variable.

***        Semantic Error              :160: The name XXX already appears in the list of imported variables.

***        Semantic Error              :161: Exportation of XXX is incompatible with its parameter mode.

Issued if a parameter appears as an export to a procedure when it is of parameter mode in.

***        Semantic Error              :162: Importation of XXX is incompatible with its parameter mode.

Issued if a parameter appears as an import to a procedure when it is of parameter mode out.

***        Semantic Error              :163: Subprogram XXX cannot be called from here.

SPARK contains rules to prevent construction of programs containing recursive subprogram calls; this error message occurs if a procedure or function is called before its body has been declared. Re-ordering of subprogram bodies in the package concerned will be required.

***        Semantic Error              :165: This parameter is overlapped by another one, which is exported.

Violation of the anti-aliasing rule.

***        Semantic Error              :166: This parameter is overlapped by an exported global variable.

Violation of the anti-aliasing rule.

***        Semantic Error              :167: Imported variable XXX is not named in the initialization specification of its package.

Issued when an own variable which is imported into the main program procedure (or into a task when the Ravenscar profile is enabled) has not been declared as being initialized by its package.  At the main program level the only imports permitted are initialized own variables of inherited packages.  There are two cases to consider: (1) the main program should be importing the variable, in which case it should be annotated in its package with --# initializes (and, of course, actually initialized in some way), or it should be an external variable or protected variable which is implicitly initialized; or (2) the own variable concerned is not initialized at elaboration, should not therefore be considered an import to the main program, and should be removed from the main program's import list.
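A minimal sketch of case (1); the package and variable names are invented for this example:

```ada
package Counter
--# own Count;
--# initializes Count;   --  announces initialization at elaboration
is
   pragma Elaborate_Body;
end Counter;

package body Counter is
   Count : Natural := 0;  --  the actual initialization, as announced
end Counter;
```

With the initializes clause in place, Counter.Count may legitimately appear as an import of the main program.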

***        Semantic Error              :168: XXX is a loop parameter, whose updating is not allowed.

***        Semantic Error              :169: Global variables of function subprograms must be of mode in.

It is an important property of SPARK that functions cannot have side-effects; therefore only the reading of global variables is permitted.  It is usually convenient to omit modes from function global annotations, but use of mode 'in' is permitted.

***        Semantic Error              :170: XXX is a formal parameter of mode in, whose updating is not allowed.

***        Semantic Error              :171: XXX is a formal parameter of mode out, whose value cannot be read.

***        Semantic Error              :172: The actual parameter associated with an exported formal parameter must be an entire variable.

Issued if an actual parameter which is an array element is associated with an exported formal parameter in a procedure call. An exported actual parameter must be either an entire variable or a record field.

***        Semantic Error              :173: This exported parameter is named in the global definition of the procedure.

Violation of the anti-aliasing rule.

***        Semantic Error              :174: XXX is not an own variable.

Occurs in initialization specifications if something other than a variable is listed as being initialized.

***        Semantic Error              :175: “all” can only be used in a justification when using a code generator profile.

A justification of an error requires the actual variables named in the error message to be referenced.  The keyword “all” can only be used with language profiles for auto-code generators such as SCADE KCG.  Such profiles are only available with the SPARK Pro Toolset.

***        Semantic Error              :176: XXX does not have a derives annotation so it may not be called from YYY which is a function or does have a derives annotation.

When analysing with flow=auto, a procedure or entry without a derives annotation may not be called by a subprogram, task or entry with a derives annotation. This is because the body of the caller must be checked against its derives annotation. In order to calculate the correct dependency relation for the body of the caller there must be derives annotations present on all called procedures or entries. Functions are considered to have implicit derives annotations.

***        Semantic Error              :180: Entire composite constant expected.

Issued when an identifier which SPARK requires to be an entire composite constant represents something other than this.

***        Semantic Error              :181: Invalid policy for constant proof rule generation.

***        Semantic Error              :182: Rule Policy for YYY.XXX already declared in current scope.

Issued when a rule policy has already been declared for this constant within this declarative region. This rule policy will be ineffective.

***        Semantic Error              :190: The name XXX already appears in the inherit clause.

***        Semantic Error              :191: The name XXX already appears in the with clause.

***        Semantic Error              :200: The parameter XXX is neither imported nor exported.

Each formal parameter of a subprogram shall be imported or exported or both.

***        Semantic Error              :201: The global variable XXX is neither imported nor exported.

Every variable in a global definition must also appear in the associated derives annotation where it will be either imported or exported or both.

***        Semantic Error              :250: The 'Size value for type XXX has already been set.

***        Semantic Error              :251: The attribute value for XXX'Size must be of an integer type.

***        Semantic Error              :252: The attribute value for XXX'Size must be a static simple expression.

The value of 'Size must be static and must be of an integer type.

***        Semantic Error              :253: The attribute value for XXX'Size must not be negative.

The value of 'Size must be a positive integer or zero.

***        Semantic Error              :254: The Size attribute can only be specified for a first subtype.

Setting 'Size for a user-defined non-first subtype is not permitted. See Ada95 LRM 13.3(48).

***        Semantic Error              :255: The Address attribute can only be specified for a variable, a constant, or a program unit.

Ada95 LRM Annex N.31 defines a program unit to be either a package, a task unit, a protected unit, a protected entry, a generic unit, or an explicitly declared subprogram other than an enumeration literal.

***        Semantic Error              :273: Own variable XXX may not be refined because it was declared with a type mark which has not subsequently been declared as an abstract proof type.

Where a type mark is included in an own variable declaration it indicates that the own variable will either be of a concrete type of that name (which may be either already declared or be declared later in the package) or of an abstract proof type declared in the package specification.  In the former case the refinement is illegal because own variables of concrete Ada types may not be refined.  In the latter case it is legal; however, no suitable proof type declaration has been found in this case.

***        Semantic Error              :300: Renaming declarations are not allowed here.

A renaming declaration must be the first declarative item of a package body or main program or it must be placed immediately after the declaration of an embedded package.

***        Semantic Error              :301: Renaming or use type declarations here can only rename subprograms in package XXX.

A renaming declaration may be placed immediately after the declaration of an embedded package; in this case it may only rename subprograms declared in that package.

***        Semantic Error              :302: The subprogram specification in this renaming declaration is not consistent with the declaration of subprogram XXX.

Issued in a subprogram renaming declaration if it contains parameter names, numbers or types which differ from those originally declared.

***        Semantic Error              :303: An operator can only be renamed by the same operator.

Issued if a renaming declaration has a different operator on each side of the reserved word RENAMES.

***        Semantic Error              :304: A renaming declaration for operator XXX is not allowed.

***        Semantic Error              :305: The specification in this renaming declaration is not consistent with the implicit declaration of operator XXX.

Issued in an operator renaming declaration if it contains types which differ from those applicable to the operator being renamed.

***        Semantic Error              :306: Operator XXX is already visible.

Occurs in an operator renaming declaration if an attempt is made to rename an operator which is already visible. (The message will also appear as a secondary consequence of trying to rename an operator between undeclared types.)

***        Semantic Error              :307: The implicit declaration of this operator does not occur in package XXX.

***        Semantic Error              :308: Type is limited.

Issued if an attempt is made to assign a variable of a type which is limited or which contains a limited type.

***        Semantic Error              :309: Operator not visible for these types.

This message means that the operator exists between the types on each side of it but that it is not visible. The most likely cause is that the types concerned are defined in another package and that renaming is required to make the operator visible.

***        Semantic Error              :310: The % operator may only appear in an assert or check statement in a for loop.

The % operator is used to indicate the value of a variable on entry to a for loop.  This is because the variable may be used in the exit expression of the loop and may also be modified in the body of the loop.  Since the semantics of Ada require the exit expression to be fixed after evaluation we require a way of reasoning about the original value of a variable prior to any alteration in the loop body.  No other situation requires this value so % may not be used anywhere else.
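A sketch of the only context in which % is permitted; the subprogram name and invariant are invented for this example:

```ada
procedure Drain (N : in out Natural)
--# derives N from N;
is
begin
   --  The loop range 1 .. N is fixed on entry even though N changes
   --  inside the body, so N% is needed to refer to N's entry value.
   for I in Natural range 1 .. N loop
      --# assert N = N% - (I - 1);
      N := N - 1;
   end loop;
end Drain;
```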

***        Semantic Error              :311: Announced own variable types may not be implemented as unconstrained arrays.

Where an own variable is announced as being of some type, SPARK requires that type to be declared; the declaration cannot be in the form of an unconstrained array because SPARK prohibits unconstrained variables.

***        Semantic Error              :312: A subprogram can only be renamed to the same name with the package prefix removed.

***        Semantic Error              :313: Only one main program is permitted.

***        Semantic Error              :314: Own variable XXX has been refined and may not appear here.

Issued if an attempt is made to use, in a second annotation, an own variable which has been refined. Second annotations should use the appropriate refinement constituents of the own variable.

***        Semantic Error              :315: Unsupported proof context.

Certain proof contexts have been included in the syntax of SPARK but are not yet supported; this error message results if one is found.

***        Semantic Error              :316: Selected components are not allowed for XXX since type YYY is private here.

If a type is private, then record field selectors may not be used. In pre- and post-conditions, a proof function can be declared to yield the required attribute of a private type.

***        Semantic Error              :317: Tilde, in a function return annotation, may only be applied to an external variable of mode IN.

The tilde decoration indicates the initial value of a variable or parameter which is both imported and exported. A function may not have an explicit side effect on a program variable and so cannot be regarded as exporting such a variable.  For modelling purposes a read of an external (stream) variable is regarded as having a side effect (outside the SPARK boundary).  Since it may be necessary to refer to the initial value of the external variable, before this implicit side effect occurs, the use of tilde is allowed only for external variables of mode IN which are globally referenced by a function.

***        Semantic Error              :318: Tilde or Percent may only be applied to variables.

The tilde decoration indicates the initial value of a variable or parameter which is both imported and exported. Percent indicates the value of a variable on entry to a for loop; this message occurs if either operator is applied to any other object.

***        Semantic Error              :319: Tilde may only be applied to a variable which is both imported and exported.

The tilde decoration indicates the initial value of a variable or parameter which is both imported and exported; this message occurs if the variable concerned is either exported only or imported only in which case no distinction between its initial and final value is required.

***        Semantic Error              :320: Tilde or Percent may only be applied to an entire variable.

Tilde (and %) may not be applied to an element of an array or a field of a record. For example, to indicate the initial value of the Ith element of array V, use V~(I), not V(I)~.

***        Semantic Error              :321: Tilde may not appear in pre-conditions.

Since it does not make sense to refer to anything other than the initial value of a variable in a pre-condition there is no need to use tilde to distinguish initial from final values.

***        Semantic Error              :322: Only imports may be referenced in pre-conditions or return expressions.

Pre-conditions are concerned with the initial values of information carried into a subprogram. Since only imports can do this only imports can appear in pre-condition expressions.

***        Semantic Error              :323: Updates may only be applied to records or arrays.

The extended SPARK update syntax is only used to express changes to components of a structured variable.

***        Semantic Error              :324: Only one field name may appear here.

When using the extended SPARK update syntax for a record, you cannot update more than one element in each clause of the update. For example, you cannot use [x, y => z]; you must instead use [x => z; y => z].

***        Semantic Error              :325: Type XXX has not been declared.

Occurs if a type is "announced" as part of an own variable clause and the end of the package is reached without an Ada declaration for a type of this name being found.

***        Semantic Error              :326: Predicate is not boolean.

Occurs anywhere where a proof context is found not to be a boolean expression.

***        Semantic Error              :327: XXX is a global variable which may not be updated in a function subprogram.

***        Semantic Error              :328: The identifier XXX (exported by called subprogram) may not be updated in a function subprogram.

Occurs if a function calls a procedure which exports a global variable; this would create an illegal side-effect of the function.

***        Semantic Error              :329: Illegal function call.

Issued if a call is made to a user-defined subprogram in a package initialization part.

***        Semantic Error              :330: Illegal use of an own variable not of this package.

Issued if an attempt is made, in a package initialization part, to update an own variable of a non-enclosing package.

***        Semantic Error              :331: Private types may not be unconstrained arrays.

***        Semantic Error              :332: This private type was not declared as limited.

Issued where the type contains a component which is a limited private type, but where the declaration of this type in the visible part of the package does not specify that the type is limited.

***        Semantic Error              :333: Initialization of XXX is not announced in the initialization clause of this package.

Issued when an own variable is initialized either by assignment or by having a pragma Import attached to it when initialization of the variable is not announced in its package's own variable initialization specification.

***        Semantic Error              :334: Identifier XXX is not the name of a function.

***        Semantic Error              :335: This annotation should be placed with the declaration of function XXX.

Issued if a function is declared in a package specification without an annotation but one is then supplied on the function body.

***        Semantic Error              :336: Unexpected annotation - all annotations required for function XXX have already occurred.

***        Semantic Error              :337: Package XXX may not be used as a prefix here.

Selected component notation may not be used in places where an item is directly visible.

***        Semantic Error              :338: Scalar parameter XXX is of mode in out and must appear as an import.

Parameters passed as mode in out must be listed as imports in the subprogram's dependency relation if they are of scalar types. The rule also applies to a parameter of a private type if its full declaration is scalar.
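A sketch of a conforming annotation; the subprogram name is invented for this example:

```ada
procedure Increment (X : in out Integer);
--# derives X from X;
--
--  X is a scalar parameter of mode in out, so it must appear as an
--  import.  Writing "--# derives X from ;" instead would raise
--  Semantic Error 338.
```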

***        Semantic Error              :339: Subprogram XXX was not declared in package YYY.

***        Semantic Error              :340: Only operators may be renamed in package specifications.

User-declared subprograms may not be renamed in package specifications although the implicitly declared function subprograms associated with operators may be.

***        Semantic Error              :341: A range may not appear here.

Issued if a range is found where a single value is expected, for example, if an array slice is constructed.

***        Semantic Error              :342: This proof annotation should be placed with the declaration of subprogram XXX.

Like global and derives annotations, proof annotations should be placed on the first appearance of a subprogram.  There may also be a requirement for a second proof annotation on a subprogram body where it references an abstract own variable.

***        Semantic Error              :343: Unexpected proof annotation - all annotations required for subprogram XXX have already occurred.

Issued if a second proof annotation for a subprogram is found but the subprogram does not reference any abstract own variables.  A second annotation is only required where it is necessary to express both an abstract (external) and a refined (internal) view of an operation.

***        Semantic Error              :399: Range error in annotation expression.

Issued if a proof annotation contains an expression that would cause a constraint error if it were in an executable Ada statement.  For example: "--# post X = T'Succ(T'Last);".  VCs generated from such malformed predicates would always be unprovable.

***        Semantic Error              :400: Expression contains division by zero.

Issued when a static expression, evaluated using perfect arithmetic, is found to contain a division by zero.

***        Semantic Error              :401: Illegal numeric literal.

Issued when a numeric literal is illegal because it contains, for example, digits not compatible with its number base.

***        Semantic Error              :402: Constraint_Error will be raised here.

Issued whenever a static expression would cause a constraint error. e.g. assigning a value to a constant outside the constant's type range. In SPARK a static expression may not yield a value which violates a range constraint.

***        Semantic Error              :403: Argument value is inconsistent with the number of dimensions of array type XXX.

Issued when an array attribute containing an argument is found and the value of the argument is inconsistent with the number of dimensions of the array type to which it is being applied.

***        Semantic Error              :406: Only scalar or non-tagged record subtypes may be declared without a constraint.

Issued if a subtype declaration of the form "subtype S is T;" is used where T is not a scalar or non-tagged record type. Additionally, T must not be private at the point of this declaration.

***        Semantic Error              :407: This choice overlaps a previous one.

Choices in case statements and array aggregates may not overlap.

***        Semantic Error              :408: Case statement is incomplete.

A case statement must either explicitly supply choices to cover the whole range of the (sub)type of the controlling expression, or it must supply an others choice.

***        Semantic Error              :409: Empty range specified.

In SPARK, no static range is permitted to be null.

***        Semantic Error              :410: Choice out of range.

The choices in case statements and array aggregates must be within the constraints of the appropriate (sub)type.

***        Semantic Error              :411: Others clause required.

Issued where an others clause is required to satisfy the Ada language rules.

***        Semantic Error              :412: Explicit boolean range not permitted.

***        Semantic Error              :413: Invalid range constraint.

Issued where a range constraint is outside the range of the (sub)type to which the constraint applies.

***        Semantic Error              :414: Array aggregate is incomplete.

An array aggregate must either explicitly supply values for all array elements or provide an others clause.

***        Semantic Error              :415: Too many entries in array aggregate.

Issued where an array aggregate using positional association contains more entries than required by the array index type.

***        Semantic Error              :416: Type may not have an empty range.

***        Semantic Error              :417: String subtypes must have a lower index bound of 1.

***        Semantic Error              :418: Index upper and/or lower bounds do not match those expected.

Issued where assignment, association or type conversion is attempted between two different constrained subtypes of the same unconstrained array type, and where the index bounds do not match.

***        Semantic Error              :419: XXX.YYY has been renamed locally, so the prefix XXX must not be used.

When an entity is renamed, the fully qualified name is no longer visible, and so must not be used.

***        Semantic Error              :420: Array index(es) not convertible.

Issued when an attempt is made to convert between two arrays whose indexes are neither of the same type nor numeric.

***        Semantic Error              :421: Array components are not of the expected type.

Issued when a type conversion attempts to convert between two array types whose components are of different types.

***        Semantic Error              :422: Array component constraints do not match those expected.

Issued when a type conversion attempts to convert between two array types whose components are of the same type but do not have constraints which can be statically determined to be identical.

***        Semantic Error              :423: Array has different number of dimensions from that expected.

Issued when attempting to convert between two array types which have different numbers of dimensions.

***        Semantic Error              :424: Attributes are not permitted in a String concatenation expression.

Character attributes such as 'Val, 'Pos, 'Succ and 'Pred are not permitted below a concatenation operator in a String expression.


***        Semantic Error              :425: String literals may not be converted.

Issued if the argument of a type conversion is a string literal. A common cause is an attempt to type qualify a string and accidentally omitting the tick character.
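The distinction can be sketched as follows (hypothetical subtype name):

```ada
subtype Name is String (1 .. 3);

N : Name;

N := Name ("abc");    -- type conversion of a string literal: error 425
N := Name'("abc");    -- qualified expression (note the tick): legal
```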

***        Semantic Error              :500: Mode expected.

Issued, when only data flow analysis is being performed, where a subprogram has no dependency clause and its global variables have not been given modes in the global annotation.

***        Semantic Error              :501: Dependency relation expected.

A dependency relation is required for each procedure if information flow analysis is to be performed.

***        Semantic Error              :502: Exportation of XXX is incompatible with its global mode.

Issued when a procedure has both a global annotation with modes and a dependency relation, and a global of mode in is listed as an export in the dependency relation.

***        Semantic Error              :503: Importation of XXX is incompatible with its global mode.

Issued when a procedure has both a global annotation with modes and a dependency relation, and a global of mode out is listed as an import in the dependency relation.

***        Semantic Error              :504: Parameter XXX is of mode in out and must appear as an import.

***        Semantic Error              :505: Global variable XXX is of mode in out and must appear as an import.

Issued where a procedure has both a global annotation with modes and a dependency relation, and a global variable of mode in out is not listed as an import in the dependency relation.

***        Semantic Error              :506: Parameter XXX is of mode in out and must appear as an export.

***        Semantic Error              :507: Global variable XXX is of mode in out and must appear as an export.

Issued where a procedure has both a global annotation with modes and a dependency relation, and a global variable of mode in out is not listed as an export in the dependency relation.

***        Semantic Error              :508: This global variable is a parameter of mode in and can only have the global mode in.

***        Semantic Error              :509: Unexpected refined dependency relation.

When using refinement in automatic flow analysis mode, if there is a dependency relation on the subprogram specification then there must also be one on the body. Similarly, if there is no dependency relation on the specification then the body is not permitted to have one.

***        Semantic Error              :550: use type clauses may only be used in SPARK95: clause ignored.

***        Semantic Error              :551: All operators for type XXX are already visible.

***        Semantic Error              :552: The type XXX already appears in the use type clause.

***        Semantic Error              :554: XXX is a limited private type for which no operators can be made visible.

***        Semantic Error              :555: XXX is not mentioned in an earlier with clause of this compilation unit.

***        Semantic Error              :600: pragma Import has a minimum of 2 and a maximum of 4 parameters.

***        Semantic Error              :601: Convention, Entity, External_Name or Link_Name expected.

***        Semantic Error              :602: An association for XXX has already been given.

***        Semantic Error              :603: No association for XXX was given.

***        Semantic Error              :604: This package may not have a body - consider use of pragma Elaborate_Body.

In Ada 95, a package body is illegal unless it is required for the purpose of providing a subprogram body, or unless this pragma is used. This error is issued where a package body is found for a package whose specification does not require a body.

***        Semantic Error              :605: pragma Elaborate_Body has one parameter.

***        Semantic Error              :606: This expression does not represent the expected package name XXX.

Issued when the parameter to a pragma Elaborate_Body is invalid.

***        Semantic Error              :607: This package requires a body and must therefore include either pragma Elaborate_Body or a subprogram declaration.

Issued for a package whose specification contains no subprogram declarations and whose own variables (as specified in the package annotation) are not all declared (and initialized where appropriate) in the package specification. Such a package is not allowed a body in Ada 95 unless either the pragma is given or a subprogram is declared.
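One way of satisfying this rule is sketched below (hypothetical names). The specification declares no subprograms, yet a body is needed to declare the own variable, so the pragma is required:

```ada
package Counter
--# own Count;
is
   pragma Elaborate_Body (Counter);
end Counter;

package body Counter is
   Count : Integer;
end Counter;
```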

***        Semantic Error              :608: Reduced accuracy subtypes of real numbers are considered obsolescent and are not supported by SPARK.

***        Semantic Error              :609: This entity cannot be assigned to.

***        Semantic Error              :610: Child packages may not be used in SPARK83.

***        Semantic Error              :611: Illegal use of deferred constant prior to its full declaration.

***        Semantic Error              :613: Illegal name for body stub.

Issued if a dotted name appears in a body stub as in "package body P.Q is separate". No legal stub could ever have such a name.

***        Semantic Error              :614: Child packages may be declared only at library level.

Issued if an attempt is made to declare a child package which is embedded in a package or subprogram.

***        Semantic Error              :615: Name does not match name of package.

Issued if the closing identifier of a package has a different number of identifiers from the name originally given for the package. For example "package P.Q is ... end P.Q.R;".

***        Semantic Error              :616: The private package XXX is not visible at this point.

Issued if an attempt is made to with or inherit a private package from the visible part of a public package.

***        Semantic Error              :617: Public sibling XXX is not visible at this point.

Arises from attempting to inherit a public sibling child package from a private child package.

***        Semantic Error              :618: The owner of the current package does not inherit the package XXX.

A private descendant (although it may be a public package) can only inherit a remote package if its parent also inherits it; this is analogous to the behaviour of embedded packages, which may also only inherit a remote package if their enclosing package also does so.

***        Semantic Error              :619: The package XXX is not owned by the current package.

This message indicates an attempt to claim that own variables of a package other than a private child package of the current package are refinement constituents of an abstract own variable of the current package.

***        Semantic Error              :620: Own variables here must be refinement constituents in package owner XXX.

Own variables of private child packages must appear as refinement constituents of the package which owns the child. If the Examiner has seen the owner package body before processing the child and has not found the required refinement constituent then this message results on processing the child.

***        Semantic Error              :621: Own variable XXX expected as a refinement constituent in this package.

Own variables of private child packages must appear as refinement constituents of the package which owns the child. If the Examiner has seen a child package which declares an own variable before examining its owner's body then this message is issued if the owner lacks the required refinement constituent declaration.

***        Semantic Error              :622: Own variable XXX did not occur in an initialization specification.

Issued if an own variable appears in an initialization clause and is also a refinement constituent of an own variable which is not marked as initialized.

***        Semantic Error              :623: Own variable XXX occurred in an initialization specification.

Issued if an own variable does not appear in an initialization clause and is also a refinement constituent of an own variable that is marked as initialized.

***        Semantic Error              :624: All operators from ancestor package XXX are already visible.

A package must appear in a with clause before types declared in it can be specified in a use type clause.

***        Semantic Error              :627: The analysis of generic body XXX is not yet supported. It will be supported in a future release of the Examiner.

 

***        Semantic Error              :630: XXX is not the name of generic subprogram.

Only generic subprograms can be instantiated.

***        Semantic Error              :631: Generic function found where a generic procedure was expected.

The subprogram kind of the generic and of its instantiation must match.

***        Semantic Error              :632: Generic procedure found where a generic function was expected.

The subprogram kind of the generic and of its instantiation must match.

***        Semantic Error              :633: Generic actual part expected,  generic unit XXX has generic formal parameters.

The number of generic formal and actual parameters must match exactly.

***        Semantic Error              :634: Unexpected generic actual part,  generic unit XXX does not have any generic formal parameters.

The number of generic formal and actual parameters must match exactly.

***        Semantic Error              :635: Incorrect number of generic actual parameters for instantiation of generic unit XXX.

The number of generic formal and actual parameters must match exactly.

***        Semantic Error              :636: Type XXX is not compatible with generic formal parameter YYY.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :637: User-defined generic units are not permitted in SPARK 83.

There are weaknesses in the generic type model of Ada 83 that prevent the implementation of a safe subset of generics in SPARK 83.  These deficiencies are overcome in Ada 95. SPARK 83 users may employ the predefined unit Unchecked_Conversion only.

***        Semantic Error              :638: Unexpected global annotation.  A generic subprogram  may not reference or update global variables.

A standalone generic subprogram may not have a global annotation.  Note that a subprogram in a generic package may have a global annotation as long as it only refers to own variables that are local to the package.

***        Semantic Error              :639: A generic formal object may only have default mode or mode in.

SPARK restricts formal objects to being constants in order to avoid concealed information flows.

***        Semantic Error              :640: A generic formal object may only be instantiated with a constant expression.

SPARK restricts formal objects to being constants in order to avoid concealed information flows.

***        Semantic Error              :641: There is no generic subprogram declaration named XXX so a generic body of that name cannot be declared here.

A generic body must be preceded by a generic declaration of the same name.

***        Semantic Error              :645: Actual array element XXX is not compatible with the element type YYY of the generic formal parameter.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :646: Actual array index XXX is not compatible with the index type YYY of the generic formal parameter.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :647: Actual array XXX has more dimensions than formal array YYY.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :648: Actual array XXX has fewer dimensions than formal array YYY.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :649: Actual array XXX is constrained but the associated formal YYY is unconstrained.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :650: Actual array XXX is unconstrained but the associated formal YYY is constrained.

See ALRM 12.5.  Each generic formal type parameter must be supplied with an actual type which is of a compatible class.  Note that SPARK does not have default values for such associations.

***        Semantic Error              :651: Variables of generic types may not be initialized at declaration.

In non-generic code we statically know the value being assigned to the variable and can check that it is in range.  In the case of a generic we cannot do this because we do not know the bounds of the variable's type.  The variable may, however, be assigned to in the sequence of statements in the generic body because generation of run-time checks will provide suitable protection from out-of-range values.
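For instance (a sketch, flow analysis aside; the unit name is invented):

```ada
generic
   type T is range <>;
procedure Reset;

procedure Reset is
   X : T := 0;     -- initialization at declaration: error 651
begin
   X := 0;         -- assignment in the body: legal (run-time check applies)
end Reset;
```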

***        Semantic Error              :652: Subtypes of generic types are not permitted.

In non-generic code we statically know the values being used as the range bounds for a subtype and can check that they are in range.  In the case of a generic we cannot do this because we do not know the bounds of the variable's type.

***        Semantic Error              :653: Constants of generic types are not permitted.

In non-generic code we statically know the value being assigned to the constant and can check that it is in range.  In the case of a generic we cannot do this because we do not know the bounds of the constant's type.  A variable, assigned to in the sequence of statements in the generic body, may be a suitable substitute for such a constant.

***        Semantic Error              :654: XXX is a generic subprogram which must be instantiated before it can be called.

Generic units provide a template for creating callable units and are not directly callable.

***        Semantic Error              :655: Invalid prefix, XXX is a generic package.

Components of generic packages cannot be accessed directly.  First instantiate the package and then access components of the instantiation.

***        Semantic Error              :656: The only currently supported attribute in this context is 'Always_Valid.

***        Semantic Error              :657: A 'Always_Valid assertion requires a variable here.

The 'Always_Valid assertion can only be applied to variables or to components of record variables.

***        Semantic Error              :658: The object in this assertion must be scalar or a non-tagged aggregation of scalar components.

The 'Always_Valid assertion can only be applied to objects which are:

(1) of a scalar type,

(2) a one dimensional array of scalar components,

(3) an entire record variable of a non-tagged type with all components that are either scalar or an array of scalar components,

(4) an array variable whose components are records satisfying (3).

Additionally a field of a record satisfying these constraints may be marked individually as always valid.

***        Semantic Error              :659: A 'Always_Valid assertion must be in the same declarative region as contains the declaration of the variable to which it refers.

***        Semantic Error              :660: A 'Always_Valid assertion must not be applied to an object already marked as always valid.

***        Semantic Error              :662: Only Mode in own variables and constituents can be marked using 'Always_Valid.

The 'Always_Valid assertion can only be applied to variables which are own variables with the mode in, or to subcomponents of records which are mode in own variables.

***        Semantic Error              :700: Mode 'in out' may not be applied to own variables or their refinement constituents.

Own variables may be given a mode to indicate that they are system level inputs or outputs (i.e. they obtain values from or pass values to the external environment).  Since effective SPARK design strictly separates inputs from outputs the mode 'in out' is not permitted.

***        Semantic Error              :701: The mode of this refinement constituent is not consistent with its subject:  XXX.

If an abstract own variable is given a mode then its refinement constituents must all be of the same mode.
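Schematically (hypothetical names), if the abstract own variable has mode 'in', its refinement in the package body must give every constituent that same mode:

```ada
package Device
--# own in Status;
is
   procedure Read (Value : out Integer);
   --# global in Status;
   --# derives Value from Status;
end Device;

package body Device
--# own Status is in Port_A, in Port_B;  -- all constituents mode 'in'
is
   -- declarations of Port_A and Port_B and the body of Read omitted
end Device;
```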

***        Semantic Error              :702: Own variable XXX must be given the mode 'in' to match its earlier announcement.

Issued if an own variable of an embedded package is not given the same mode as the earlier refinement constituent that announced it would exist.

***        Semantic Error              :703: Own variable XXX must be given the mode 'out' to match its earlier announcement.

Issued if an own variable of an embedded package is not given the same mode as the earlier refinement constituent that announced it would exist.

***        Semantic Error              :704: Own variable XXX may not have a mode because one was not present in its earlier announcement.

Issued if an own variable of an embedded package is given a mode when the earlier refinement constituent that announced it would exist did not have one.

***        Semantic Error              :705: Refinement constituent XXX must be given the mode 'in' to match the child package own variable with which it is being associated.

If a refinement constituent is an own variable of a private package then the constituent must have the same mode as the own variable to which it refers.

***        Semantic Error              :706: Refinement constituent XXX must be given the mode 'out' to match the child package own variable with which it is being associated.

If a refinement constituent is an own variable of a private package then the constituent must have the same mode as the own variable to which it refers.

***        Semantic Error              :707: Refinement constituent XXX may not have a mode because one was not present on the child package own variable with which it is being associated.

If a refinement constituent is an own variable of a private package then the constituent can only be given a mode if the own variable to which it refers has one.

***        Semantic Error              :708: Own variable XXX has a mode and may not appear in an initializes clause.

Moded own variables (stream variables) are implicitly initialized by the environment to which they are connected and may not appear in initializes clauses, since this would require their explicit initialization.

***        Semantic Error              :709: Own variable or constituent XXX has mode 'out' and may not be referenced by a function.

Functions are permitted to reference own variables that are either unmoded or of mode 'in'.  Since mode 'out' own variables represent outputs to the environment, reading them in a function does not make sense and is not allowed.

***        Semantic Error              :710: The own variable or constituent XXX is of mode 'in' and can only have global mode 'in'.

Global modes, if given, must be consistent with the modes of own variables that appear in the global list.

***        Semantic Error              :711: The own variable or constituent XXX is of mode 'out' and can only have global mode 'out'.

Global modes, if given, must be consistent with the modes of own variables that appear in the global list.

***        Semantic Error              :712: The own variable or constituent XXX is of either mode 'in' or mode 'out' and  may not have global mode 'in out'.

Global modes, if given, must be consistent with the modes of own variables that appear in the global list.

***        Semantic Error              :713: The own variable or constituent XXX is of mode 'in' and may not appear in a dependency clause as an export.

Own variables with mode 'in' denote system-level inputs; their exportation is not allowed.

***        Semantic Error              :714: The own variable or constituent XXX is of mode 'out' and may not appear in a dependency clause as an import.

Own variables with mode 'out' denote system-level outputs; their importation is not allowed.

***        Semantic Error              :715: Function XXX references external (stream) variables and may only appear directly in an assignment or return statement.

To avoid ordering effects, functions which globally access own variables which have modes (indicating that they are connected to the external environment) may only appear directly in assignment or return statements. They may not appear as actual parameters or in any other form of expression.
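For example (a sketch; Sensor.Read is assumed to be a function whose global annotation references a moded own variable):

```ada
Value := Sensor.Read;        -- legal: directly in an assignment
-- "return Sensor.Read;" would likewise be legal in a function body

Limit := Sensor.Read + 1;    -- illegal: part of a larger expression (error 715)
Process (Sensor.Read);       -- illegal: used as an actual parameter (error 715)
```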

***        Semantic Error              :716: External (stream) variable XXX may only appear directly in an assignment or return statement; or as an actual parameter to an unchecked conversion.

To avoid ordering effects, own variables which have modes (indicating that they are connected to the external environment) may only appear directly in assignment or return statements. They may not appear as actual parameters (other than to instantiations of Unchecked_Conversion) or in any other form of expression.

***        Semantic Error              :717: External (stream) variable XXX is of mode 'in' and may not be assigned to.

Own variables with mode 'in' represent inputs to the system from the external environment. As such, assigning to them does not make sense and is not permitted.

***        Semantic Error              :718: External (stream) variable XXX is of mode 'out' and may not be referenced.

Own variables with mode 'out' represent outputs to the external environment from the system. As such, referencing them does not make sense and is not permitted.

***        Semantic Error              :719: External (stream) variables may not be referenced or updated during package elaboration.

Own variables with modes represent inputs and outputs between the external environment and the system. Referencing or updating them during package elaboration would introduce ordering effects and is not permitted.

***        Semantic Error              :720: Variable XXX is an external (stream) variable and may not be initialized at declaration.

Own variables with modes represent inputs and outputs between the external environment and the system. Initializing such a variable at its declaration would amount to referencing or updating it during package elaboration, which would introduce ordering effects and is not permitted.

***        Semantic Error              :721: This refined function global annotation may not reference XXX because it is an external (stream) variable whose abstract subject YYY does not have a mode.

Functions may be used to reference external (stream) variables and the Examiner generates the appropriate information flow to show that the value returned by the function is 'volatile'.  If the abstract view of the same function shows it referencing an own variable which is not an external stream then the volatility of the function is concealed.  The error can be removed either by making the abstract own variable a mode 'in' stream or by using a procedure instead of a function to read the refined stream variable.

***        Semantic Error              :722: The mode on abstract global variable YYY must be made 'in out' to make it consistent with the referencing of mode 'in' external (stream) constituent XXX in the refined global annotation.

Where a procedure references an external (stream) variable of mode 'in' the Examiner constructs appropriate information flow to show that the input stream is 'volatile'. If the abstract view shows that the procedure obtains its result by simply reading an own variable which is not an external stream then the volatility is concealed.  The error can be removed either by making the global mode of XXX 'in out' or making XXX an external (stream) variable of mode 'in'.

***        Semantic Error              :723: Variable XXX must appear in this refined global annotation.

Issued when a global variable which is present in the first (abstract) global annotation is omitted from the second (refined) one.

***        Semantic Error              :724: Exit label must match the label of the most closely enclosing loop statement.

If an exit statement names a loop label, then the most closely enclosing loop statement must have a matching label.
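Schematically:

```ada
Outer : loop
   Inner : loop
      exit Inner when Done;   -- legal: names the closest enclosing loop
      exit Outer when Done;   -- illegal: error 724
   end loop Inner;
end loop Outer;
```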

***        Semantic Error              :725: Protected function or variable XXX may only appear directly in an assignment or return statement.

To avoid ordering effects, protected functions may only appear directly in assignment or return statements. They may not appear as actual parameters or in any other form of expression.  Ordering effects occur because the global state referenced by the protected function may be updated by another process during expression evaluation.

***        Semantic Error              :730: A loop with no iteration scheme or exit statements may only appear as the last statement in the outermost scope of the main subprogram (or a task body when using the Ravenscar profile).

If a loop has neither an iteration scheme nor any exit statements then it will run forever. Any statements following it will be unreachable. SPARK only allows one such loop which must be the last statement of the main program.
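The permitted form can be sketched as follows (the subprogram names are invented):

```ada
procedure Main
is
begin
   Initialize;
   loop            -- no iteration scheme, no exit statements: runs forever
      Poll_Inputs;
      Update_Outputs;
   end loop;       -- legal only as the last statement of the main program
end Main;
```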

***        Semantic Error              :750: The identifier YYY.XXX is either undeclared or not visible at this point. An array type may not be used as its own index type.

The type mark used for the index of an array type declaration must not be the same as the name of the array type being declared.

***        Semantic Error              :751: The identifier YYY.XXX is either undeclared or not visible at this point. A record type may not include fields of its own type.

The type mark given for a field in a record type declaration must not be the same as the name of the record type being declared.

***        Semantic Error              :752: The identifier YYY.XXX is either undeclared or not visible at this point. This identifier must appear in a preceding legal global annotation or formal parameter list.

For an identifier to appear legally as an import in a derives annotation, it must be a formal parameter or must appear legally in a preceding global annotation and must be of mode 'in' or mode 'in out'.

***        Semantic Error              :753: The identifier YYY.XXX is either undeclared or not visible at this point. This identifier must appear in a preceding legal global annotation or formal parameter list.

For an identifier to appear legally as an export in a derives annotation, it must be a formal parameter or must appear legally in a preceding global annotation and must be of mode 'out' or mode 'in out'.

***        Semantic Error              :754: The identifier YYY.XXX is either undeclared or not visible at this point. This package must be both inherited and withed to be visible here.

For a package name to be visible in Ada context, it must appear in both the inherit clause and the with clause of the enclosing package.

***        Semantic Error              :755: The identifier YYY.XXX is either undeclared or not visible at this point. A parent of a child package must be inherited to be visible here.

A parent of a child package must be inherited (but not withed) to be visible in that child.

***        Semantic Error              :756: The identifier YYY.XXX is either undeclared or not visible at this point. The grandparent of a child package should not appear in this prefix.

A grandparent of a child package should not be included in prefixes referencing a declaration of the child package.

***        Semantic Error              :770: If Any_Priority is defined, Priority and Interrupt_Priority must also be defined.

If the type Any_Priority is defined in package System, then the subtypes Priority and Interrupt_Priority must also be defined; if support for tasking is not required, then the definition of Any_Priority may be removed.

***        Semantic Error              :771: The parent type of this subtype must be Any_Priority.

Ada 95 requires that both Priority and Interrupt_Priority be immediate subtypes of Any_Priority.

***        Semantic Error              :772: The range of Priority must contain at least 30 values; LRM D.1(26).

Ada 95 requires that the range of the subtype Priority include at least 30 values; this requirement is stated in the Ada 95 Language Reference Manual at D.1(26).

***        Semantic Error              :773: Priority'First must equal Any_Priority'First; LRM D.1(10).

Ada 95 requires that task priority types meet the following criteria, the second of which is relevant to this error:

·       subtype Any_Priority is Integer range implementation-defined;

·       subtype Priority is Any_Priority range Any_Priority'First .. implementation-defined;

·       subtype Interrupt_Priority is Any_Priority range Priority'Last+1 .. Any_Priority'Last.

***        Semantic Error              :774: Interrupt_Priority'First must equal Priority'Last + 1; LRM D.1(10).

Ada 95 requires that task priority types meet the following criteria, the third of which is relevant to this error:

·       subtype Any_Priority is Integer range implementation-defined;

·       subtype Priority is Any_Priority range Any_Priority'First .. implementation-defined;

·       subtype Interrupt_Priority is Any_Priority range Priority'Last+1 .. Any_Priority'Last.

***        Semantic Error              :775: Interrupt_Priority'Last must equal Any_Priority'Last; LRM D.1(10).

Ada 95 requires that task priority types meet the following criteria, the third of which is relevant to this error:

·       subtype Any_Priority is Integer range implementation-defined;

·       subtype Priority is Any_Priority range Any_Priority'First .. implementation-defined;

·       subtype Interrupt_Priority is Any_Priority range Priority'Last+1 .. Any_Priority'Last.

***        Semantic Error              :776: In SPARK95 mode, only packages Standard, System, Ada.Real_Time and Ada.Interrupts may be specified in the config file.

In SPARK95 mode, the packages that may be specified in the target configuration file are: Standard, System, Ada.Real_Time and Ada.Interrupts. The latter two are ignored unless the Ravenscar profile is selected.

***        Semantic Error              :777: In package System, Priority must be an immediate subtype of Integer.

Ada 95, and hence SPARK95, defines Priority as being an immediate subtype of Integer.

***        Semantic Error              :778: This identifier is not valid at this point in the target configuration file.

The specified identifier cannot be used here; it is most probably either not valid in the target configuration file at all, or might be valid in a different package, but not here.

***        Semantic Error              :779: Definition of this package in the target configuration file is not allowed in SPARK83 mode.

In SPARK83 mode, only package Standard may be specified in the target configuration file.

***        Semantic Error              :780: Type XXX must be private.

This type may only be declared as private in the target configuration file.

***        Semantic Error              :781: The lower bound of a signed integer type declaration must be greater than or equal to System.Min_Int.

This error can only be generated in SPARK95 mode when the configuration file specifies a value for System.Min_Int.

***        Semantic Error              :782: The upper bound of a signed integer type declaration must be less than or equal to System.Max_Int.

This error can only be generated in SPARK95 mode when the configuration file specifies a value for System.Max_Int.

***        Semantic Error              :783: Modulus must be less than or equal to System.Max_Binary_Modulus.

This error can only be generated in SPARK95 mode when the configuration file specifies a value for System.Max_Binary_Modulus.

***        Semantic Error              :784: System.Max_Binary_Modulus must be a positive power of 2.

***        Semantic Error              :785: The number of digits specified exceeds the value defined for System.Max_Digits.

The maximum decimal precision for a floating point type, where a range specification has not been included, is defined by System.Max_Digits.

***        Semantic Error              :786: The number of digits specified exceeds the value defined for System.Max_Base_Digits.

The maximum decimal precision for a floating point type, where a range specification has been included, is defined by System.Max_Base_Digits.

***        Semantic Error              :787: Digits value must be positive.

***        Semantic Error              :788: Delta value must be positive.

***        Semantic Error              :789: The only currently supported type attribute in this context is 'Base.

***        Semantic Error              :790: A base type assertion requires a type here.

***        Semantic Error              :791: The base type in this assertion must be a predefined type.

Predefined types are those defined either by the language, or in package Standard, using the configuration file mechanism.

***        Semantic Error              :792: The types in this assertion must both be either floating point or signed integer.

***        Semantic Error              :793: This base type must have a defined range in the configuration file.

If a predefined type is to be used in a base type assertion or in a derived type declaration, then it must appear in the configuration file and have a well-defined range.
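A minimal sketch of a base type assertion over a configured predefined type (names such as Counters and Count are assumed for illustration):

```ada
-- In the target configuration file, give Integer a well-defined range:
-- package Standard is
--    type Integer is range -2**31 .. 2**31 - 1;
-- end Standard;

package Counters
is
   type Count is range 0 .. 10_000;
   --# assert Count'Base is Integer;  -- legal only if Integer's range is configured
end Counters;
```

Without the configuration-file entry for Integer, the assertion above would be rejected with this error.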

***        Semantic Error              :794: Range of subtype exceeds range of base type.

***        Semantic Error              :795: A base type assertion must be in the same declarative region as that of the full type definition.

***        Semantic Error              :796: This type already has a base type: either it already has a base type assertion, or is explicitly derived, or it is a predefined type.

A base type assertion can only be given exactly once. Explicitly derived scalar types and predefined types never need a base type assertion.

***        Semantic Error              :797: The base type in a floating point base type assertion must have a defined accuracy.

***        Semantic Error              :798: The accuracy of the base type in a base type assertion must be at least that of the type which is the subject of the assertion.

***        Semantic Error              :799: Only a simple type can be the subject of a base type assertion.

***        Semantic Error              :800: Modulus must be a positive power of 2.

In SPARK, modular types must have a modulus which is a positive power of 2.

***        Semantic Error              :801: Modular types may only be used in SPARK95.

Ada83 (and hence SPARK83) does not include modular types.

***        Semantic Error              :803: Unary arithmetic operators are not permitted for modular types.

Unary arithmetic operators are of little value for modular types.  The "abs" and "+" operators have no effect for modular types, and so are not required.  The unary minus operator is a source of potential confusion, and so is not permitted in SPARK.

***        Semantic Error              :804: Universal expression may not be implicitly converted to a modular type here. Left hand operand requires qualification to type XXX.

A universal expression cannot be used as the left hand operand of a binary operator if the right hand operand is of a modular type.  Qualification of the left hand expression is required in this case.

***        Semantic Error              :805: Universal expression may not be implicitly converted to a modular type here. Right hand operand requires qualification to type XXX.

A universal expression cannot be used as the right hand operand of a binary operator if the left hand operand is of a modular type.  Qualification of the right hand expression is required in this case.
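The qualification required by errors 804 and 805 can be sketched as follows (Byte and Example are assumed names):

```ada
type Byte is mod 2**8;

procedure Example (B : in Byte; R : out Byte)
is
begin
   -- Illegal: universal literal as left hand operand of a modular operator
   -- R := 1 + B;        -- semantic error 804
   R := Byte'(1) + B;    -- legal: the literal is qualified to type Byte
end Example;
```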

***        Semantic Error              :811: Unnecessary others clause - case statement is already complete.

***        Semantic Error              :814: Default_Bit_Order must be of type Bit_Order.

The only possible type for the constant System.Default_Bit_Order is System.Bit_Order when it appears in the configuration file.

***        Semantic Error              :815: The only allowed values of Default_Bit_Order are Low_Order_First and High_Order_First.

System.Bit_Order is implicitly declared in package System when a configuration file is given. This is an enumeration type with only two literals Low_Order_First and High_Order_First.

***        Semantic Error              :820: Abstract types are not currently permitted in SPARK.

Only non-abstract tagged types are currently supported.  It is hoped to lift this restriction in a future Examiner release.

***        Semantic Error              :821: This type declaration must be a tagged record because its private type is tagged.

If a type is declared as "tagged private" then its full declaration must be a tagged record.
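A minimal sketch of the required pairing (package P and field X are assumed names):

```ada
package P is
   type T is tagged private;
private
   type T is tagged record   -- full view must be a tagged record
      X : Integer;
   end record;
end P;
```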

***        Semantic Error              :822: XXX is not a tagged type; only tagged types may be extended.

In SPARK, "new" can only be used to declare a type extension; other derived types are not permitted.

***        Semantic Error              :823: This type may not be extended in the same package in which it is declared.

SPARK only permits types from another library package to be extended. This rule prevents overloading of inherited operations.

***        Semantic Error              :824: This package already extends a type from package XXX.  Only one type extension per package is permitted.

SPARK only permits one type extension per package.  This rule prevents overloading of inherited operations.

***        Semantic Error              :825: Type XXX expected in order to complete earlier private extension.

Since SPARK only permits one type extension per package it follows that the declaration "new XXX with private" in a package visible part must be paired with "new XXX with record..." in its private part.  The ancestor type XXX must be the same in both declarations.
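A schematic sketch of the required pairing, assuming a root type Roots.Root declared in another library package:

```ada
with Roots;
--# inherit Roots;
package Children is
   type Child is new Roots.Root with private;
private
   type Child is new Roots.Root with record  -- ancestor must again be Roots.Root
      Extra : Integer;
   end record;
end Children;
```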

***        Semantic Error              :826: Type extension is not permitted in SPARK 83.

Type extension is an Ada 95 feature not included in Ada or SPARK 83.

***        Semantic Error              :827: The actual parameter associated with a tagged formal parameter in an inherited operation must be an object not an expression.

There are several reasons for this SPARK rule.  Firstly, Ada requires tagged parameters to be passed by reference and so an object must exist at least implicitly. Secondly, in order to perform flow analysis of inherited subprogram calls, the Examiner needs to identify what subset of the information available at the point of call is passed to and from the called subprogram.  Since information can only flow through objects it follows that the actual parameter must be an object.

***        Semantic Error              :828: Tagged types and tagged type extensions may only be declared in library-level package specifications.

This SPARK rule facilitates the main uses of tagged types while greatly simplifying visibility rules.

***        Semantic Error              :829: Illegal re-declaration: this subprogram shares the same name as the inheritable root operation XXX but does not override it.

To avoid overloading, SPARK prohibits more than one potentially visible subprogram having the same name.

***        Semantic Error              :830: A private type may not be implemented as a tagged type or an extension of a tagged type.

This rule means that a private type can only be implemented as a tagged type if the private type itself is tagged.

***        Semantic Error              :831: Extended tagged types may only be converted in the direction of their root type.

This is an Ada rule: type conversions simply omit unused fields of the extended type.  It follows that conversions must be in the direction of the root type.

***        Semantic Error              :832: Only tagged objects, not expressions, may be converted.

For flow analysis purposes the Examiner needs to know what subset of the information in the unconverted view is available in the converted view.  Since information can only flow through objects it follows that only objects can be converted.

***        Semantic Error              :833: Invalid record aggregate: type XXX has a private ancestor.

If an extended type has a private ancestor then an extension aggregate  must be used rather than a normal aggregate.

***        Semantic Error              :834: Null records are only permitted if they are tagged.

An empty record can have no use in a SPARK program other than as a root type from which other types can be derived and extended. For this reason, null records are only allowed if they are tagged.

***        Semantic Error              :835: XXX is not an extended tagged record type.

An extension aggregate is only appropriate if the record type it is defining is an extended record.  A normal aggregate should be used for other record (and array) types.

***        Semantic Error              :836: This expression does not represent a valid ancestor type of the aggregate XXX.

The expression before the reserved word "with" must be of an ancestor type of the overall aggregate type.  In SPARK, the ancestor expression may not be a subtype mark.

***        Semantic Error              :837: Invalid record aggregate: there is a private ancestor between the type of this expression and the type of the aggregate XXX.

The ancestor type can be a tagged type with a private extension; however, there must be no private extensions between the ancestor type and the type of the aggregate.

***        Semantic Error              :838: Incomplete aggregate: null record cannot be used here because fields in XXX require values.

The aggregate form "with null record" can only be used if the type of the aggregate is a null record extension of the ancestor type.  If any fields are added between the ancestor type and the aggregate type then values need to be supplied for them so "null record" is inappropriate.
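A schematic sketch of the rule (for brevity the extensions are shown together, although SPARK's one-extension-per-package rules, errors 823 and 824, would place them in separate packages):

```ada
type Base is tagged record
   A : Integer;
end record;

type Same   is new Base with null record;
type Bigger is new Base with record
   B : Integer;
end record;

S : Same   := Same'(Base'(A => 1) with null record);  -- legal: no added fields
G : Bigger := Bigger'(Base'(A => 1) with B => 2);     -- "null record" illegal here
```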

***        Semantic Error              :839: This package already contains a root tagged type or tagged type extension. Only one such declaration per package is permitted.

SPARK permits one root tagged type or one tagged type extension per package, but not both.  This rule prevents the declaration of illegal operations with more than one controlling parameter.

***        Semantic Error              :840: A tagged or extended type may not appear here. SPARK does not permit the declaration of primitive functions with controlling results.

A primitive function controlled by its return result would be almost unusable in SPARK because a data flow error would occur wherever it was used.

***        Semantic Error              :841: The return type in the declaration of this function contained an error. It is not possible to check the validity of this return type.

Issued when there is an error in the return type on a function's initial declaration.  In this situation we cannot be sure what return type is expected in the function's body. It would be misleading to simply report a type mismatch since the types might match perfectly and both be wrong.  Instead, the Examiner reports the above error and refuses to analyse the function body until its specification is corrected.

***        Semantic Error              :842: Pragma Atomic_Components is not permitted in SPARK when the Ravenscar profile is selected.

***        Semantic Error              :843: Pragma Volatile_Components is not permitted in SPARK when the Ravenscar profile is selected.

***        Semantic Error              :844: Missing or contradictory overriding_indicator for operation XXX. This operation successfully overrides its parent operation. In SPARK 2005, an operation which successfully overrides a parent operation must be specified with Overriding.

***        Semantic Error              :845: Subprogram XXX does not successfully override a parent operation. In SPARK 2005, an overriding operation must successfully override an operation inherited from the parent.


***        Semantic Error              :850: This construct may only be used when the Ravenscar profile is selected.

Concurrent features of the SPARK language (protected objects, tasking, etc.) are only supported when the Ravenscar profile is selected.

***        Semantic Error              :851: The parameter to pragma Atomic must be a simple_name.

The parameter to pragma Atomic must be a simple_name and may not be passed using a named association.

***        Semantic Error              :852: pragma Atomic may only appear in the same immediate scope as the type to which it applies.

This is an Ada rule (pragma Atomic takes a local name; see LRM 13.1(1)). Note that this precludes the use of pragma Atomic on a predefined type.

***        Semantic Error              :853: pragma Atomic may only apply to a scalar base type, or to a non-tagged record type with exactly 1 field that is a predefined scalar type.

pragma Atomic may only be applied to base types that are scalar (i.e. enumeration types, integer types, real types, modular types), or to a non-tagged record type with a single field which is a predefined scalar type, such as Integer, Character, or Boolean. As an additional special case, a record type with a single field of type System.Address is also allowed.
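Two sketches of legal uses (Status_Register and Port are assumed names):

```ada
type Status_Register is mod 2**8;   -- scalar base type
pragma Atomic (Status_Register);

type Port is record
   Value : Integer;                 -- single field of a predefined scalar type
end record;
pragma Atomic (Port);
```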

***        Semantic Error              :854: pragma Atomic takes exactly one parameter.

***        Semantic Error              :855: The type of own variable XXX is not consistent with its modifier.

An own variable with a task modifier must be of a task type, and a task own variable must have the task modifier. An own variable with a protected modifier must be a protected object, a suspension object, or an object of a type to which pragma Atomic applies; conversely, a protected or suspension object own variable must have the protected modifier.

***        Semantic Error              :858: A variable that appears in a protects property list may not appear in a refinement clause.

A variable in a protects list is effectively protected and hence cannot be refined.

***        Semantic Error              :859: A protected own variable may not appear in a refinement clause.

Protected state cannot be refined or be used as refinement constituents.

***        Semantic Error              :860: Own variable XXX appears in a protects list and hence must appear in the initializes clause.

Protected state (including all refinement constituents) must be initialized.

***        Semantic Error              :861: Both abstract own variable XXX and refinement constituent YYY must have an Integrity property.

If an abstract own variable has an Integrity property, then so must all its refinement constituents, and vice-versa.

***        Semantic Error              :862: Both abstract own variable XXX and refinement constituent YYY must have the same Integrity value.

If both an abstract own variable and a refinement constituent have Integrity properties specified, then the value of the Integrity must be the same.

***        Semantic Error              :863: Own variable XXX is protected and may not appear in an initializes clause.

Protected own variables must always be initialized, and should not appear in initializes annotations.

***        Semantic Error              :864: Unexpected initialization specification - all own variables of this package are either implicitly initialized, or do not require initialization.

The initialization of an own variable and that of its refinement constituents must be consistent.

***        Semantic Error              :865: Field XXX is part of the ancestor part of this aggregate and does not require a value here.

An extension aggregate must supply values for all fields that are part of the overall aggregate type but not those which are part of the ancestor part.

***        Semantic Error              :866: The expression in a delay_until statement must be of type Ada.Real_Time.Time.

When the Ravenscar Profile is selected, the delay until statement may be used. The argument of this statement must be of type Ada.Real_Time.Time.

***        Semantic Error              :867: Subprogram XXX contains a delay statement but does not have a delay property.

Any subprogram that may call delay until must have a delay property in a declare annotation.  Your subprogram is directly or indirectly making a call to delay until.
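A minimal sketch of a delay property and its matching body (package Timers and the period are assumed for illustration):

```ada
with Ada.Real_Time;
--# inherit Ada.Real_Time;
package Timers is
   procedure Wait_A_While;
   --# declare delay;
end Timers;

package body Timers is
   procedure Wait_A_While
   is
      use type Ada.Real_Time.Time;
   begin
      delay until Ada.Real_Time.Clock + Ada.Real_Time.Milliseconds (10);
   end Wait_A_While;
end Timers;
```

Without the declare annotation on the specification, the delay until statement in the body would be reported with this error.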

***        Semantic Error              :868: Protected object XXX may only be declared immediately within a library package.

This error message is issued if a type mark representing a protected type appears anywhere other than in a library level variable declaration or library-level own variable type announcement.

***        Semantic Error              :869: Protected type XXX already contains an Entry declaration; only one Entry is permitted.

The Ravenscar profile prohibits a protected type from declaring more than one entry.

***        Semantic Error              :870: Protected type XXX does not have any operations, at least one operation must be declared.

A protected type which provides no operations can never be used so SPARK requires the declaration of at least one.

***        Semantic Error              :871: A type can only be explicitly derived from a predefined Integer or Floating Point type or from a tagged record type.

***        Semantic Error              :872: Variable XXX is not protected; only protected items may be globally accessed by protected operations.

In order to avoid the possibility of shared data corruption, SPARK prohibits protected operations from accessing unprotected data items.

***        Semantic Error              :873: This subprogram requires a global annotation which references the protected type name XXX.

In order to statically detect certain bounded errors defined by the Ravenscar profile, SPARK requires every visible operation of a protected type to globally reference the abstract state of the type.

***        Semantic Error              :874: Protected state XXX must be initialized at declaration.

Because there is no guarantee that a concurrent thread that initializes a protected object will be executed before one that reads it, the only way we can be sure that a protected object is properly initialized is to do so at the point of declaration. You have either declared some protected state and not included an initialization or you have tried to initialize some protected state in package body elaboration.

***        Semantic Error              :875: Protected type expected; access discriminants may only refer to protected types in SPARK.

Access discriminants have been allowed in SPARK solely to allow devices made up of co-operating Ravenscar-compliant units to be constructed.  For this reason only protected types may appear in access discriminants.

***        Semantic Error              :876: This protected type or task declaration must include either a pragma Priority or pragma Interrupt_Priority.

To allow the static detection of certain bounded errors defined by the Ravenscar profile, SPARK requires an explicitly-set priority for each protected type, task type or object of those types.  System.Default_Priority may be used explicitly, provided package System has been defined in the configuration file.

***        Semantic Error              :877: Priority values require an argument which is an expression of type integer.

***        Semantic Error              :878: This protected type declaration contains a pragma Attach_Handler and must therefore also include a pragma Interrupt_Priority.

To allow the static detection of certain bounded errors defined by the Ravenscar profile, SPARK requires an explicitly-set priority for each protected type or object. System.Default_Priority may be used explicitly, provided package System has been defined in the configuration file.

***        Semantic Error              :879: Unexpected pragma XXX: this pragma may not appear here.

pragma Interrupt_Priority must be the first item in a protected type declaration or task type declaration; pragma Priority must be the first item in a protected type declaration, task type declaration or the main program.

***        Semantic Error              :880: Pragma Priority or Interrupt_Priority expected here.

Issued when a pragma other than Priority or Interrupt_Priority appears as the first item in a protected type or task type declaration.

***        Semantic Error              :881: The priority of XXX must be in the range YYY.

See LRM D.1(17).

***        Semantic Error              :882: Integrity property requires an argument which is an expression of type Natural.

***        Semantic Error              :883: Pragma Interrupt_Handler may not be used; SPARK does not support the dynamic attachment of interrupt handlers [LRM C3.1(9)].

Interrupt_Handler is of no use unless dynamic attachment of interrupt handlers is to be used.

***        Semantic Error              :884: Pragma Attach_Handler is only permitted immediately after the corresponding protected procedure declaration in a protected type declaration.

Pragma Attach_Handler may only be used within a protected type declaration.  Furthermore, it must immediately follow a protected procedure declaration with the same name as the first argument to the pragma.

***        Semantic Error              :885: Pragma Attach_Handler may only be applied to a procedure with no parameters.

See LRM C.3.1(5).

***        Semantic Error              :887: A discriminant may only appear alone, not in an expression.

Issued when a task or protected type priority is set using an expression involving a discriminant. The use of such an expression greatly complicates the static evaluation of the priority of task or protected subtypes thus preventing the static elimination of certain Ravenscar bounded errors.

***        Semantic Error              :888: Unexpected Delay, XXX already has a Delay property.

A procedure may only have a maximum of one delay annotation.

***        Semantic Error              :889: The own variable XXX must have the suspendable property.

The type used to declare this object must be a protected type with an entry, or a suspension object type.

***        Semantic Error              :890: The name XXX already appears in the suspends list.

Items may not appear more than once in a suspends list.

***        Semantic Error              :891: Task type or protected type required.

Issued in a subtype declaration where the constraint is a discriminant constraint.  Only task and protected types may take a discriminant constraint as part of a subtype declaration.

***        Semantic Error              :892: Array type, task type or protected type required.

Issued in a subtype declaration where the constraint is either a discriminant constraint or an index constraint (these two forms cannot always be distinguished syntactically).  Only task and protected types may take a discriminant constraint and only array types may take an index constraint as part of a subtype declaration.

***        Semantic Error              :893: Number of discriminant constraints differs from number of known discriminants of type XXX.

Issued in a subtype declaration if too many or too few discriminant constraints are supplied.

***        Semantic Error              :894: Only variables of a protected type may be aliased.

SPARK supports the keyword aliased in variable declarations only so that protected and task types can support access discriminants.  Since it has no other purpose it may not be used except in a protected object declaration.

***        Semantic Error              :895: Attribute Access may only be applied to variables which are declared as aliased, variable XXX is not aliased.

This is a slightly annoying Ada issue.  Marking a variable as aliased prevents it from being placed in a register, which would make pointing at it hazardous; however, SPARK only permits 'Access on protected types, which are limited and therefore always passed by reference and so immune from register optimization.  Requiring aliased on protected objects that will appear in discriminant constraints is therefore unwanted syntactic sugar only.

***        Semantic Error              :896: The task type XXX does not have an associated body.

Issued at the end of a package body if it contains neither a body nor a body stub for a task type declared in its specification.

***        Semantic Error              :897: The protected type XXX does not have an associated body.

Issued at the end of a package body if it contains neither a body nor a body stub for a protected type declared in its specification.

***        Semantic Error              :898: XXX is not a protected or task type which requires a body.

Issued if a body or body stub for a task or protected type is encountered and there is no matching specification.

***        Semantic Error              :899: A body for type XXX has already been declared.

Issued if a body or body stub for a task or protected type is encountered and an earlier body has already been encountered.

***        Semantic Error              :901: Suspension object XXX may only be declared immediately within a library package specification or body.

Suspension objects must be declared at library level. They cannot  be used in protected type state or as local variables in subprograms.

***        Semantic Error              :902: Recursive use of typemark XXX in known discriminant.

***        Semantic Error              :903: Protected or suspension object types cannot be used to declare constants.

Protected and suspension objects are used to ensure integrity of shared objects. If it is necessary to share constant data then these constructs should not be used.

***        Semantic Error              :904: Protected or suspension objects cannot be used as subprogram parameters.

SPARK does not currently support this feature.

***        Semantic Error              :905: Protected or suspension objects cannot be returned from functions.

SPARK does not currently support this feature.

***        Semantic Error              :906: Protected or suspension objects cannot be used in composite types.

Protected and suspension objects cannot be used in record or array structures.

***        Semantic Error              :907: Delay until must be called from a task or unprotected procedure body.

You are calling delay until from an invalid construct. Any construct that calls delay until must have a delay property in its declare annotation, and must be a task body or an unprotected procedure body.

***        Semantic Error              :908: Blocking properties are not allowed in protected scope.

Procedures in protected scope must not block and therefore blocking properties are prohibited.

***        Semantic Error              :909: Object XXX cannot suspend.

This error is issued if: the suspendable property is applied to an own variable that cannot suspend; a variable whose own variable has the suspendable property is declared with a type that cannot suspend; or an item in a suspends list does not have the suspendable property. An object can only suspend if it is a suspension object or a protected object with an entry.

***        Semantic Error              :910: Name XXX must appear in the suspends list property for the enclosing unit.

Protected entry calls and calls to Ada.Synchronous_Task_Control.Suspend_Until_True may block the currently executing task. SPARK requires you announce this fact by placing the actual callee name in the suspends list for the enclosing unit.
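A schematic sketch of a suspends list (S and Block are assumed names; a complete SPARK unit would also need the relevant own-variable and global annotations):

```ada
-- S is a library-level suspension object:
--   S : Ada.Synchronous_Task_Control.Suspension_Object;

procedure Block;
--# global in out S;
--# declare suspends => S;
-- The body calls Ada.Synchronous_Task_Control.Suspend_Until_True (S),
-- which may block, so S must appear in the suspends list above.
```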

***        Semantic Error              :911: The argument in pragma Priority for the main program must be an integer literal or a local constant of static integer value.

If the main program priority is not an integer literal then you should declare a constant that has the required value in the declarative part of the main program prior to the position of the pragma.

***        Semantic Error              :912: This call contains a delay property that is not propagated to the enclosing unit.

The call being made has a declare annotation that contains a delay property. SPARK requires that this property is propagated up the call chain and hence must appear in a declare annotation for the enclosing unit.

***        Semantic Error              :913: This call has a name in its suspends list which is not propagated to the enclosing unit.

The call being made has a declare annotation that contains a suspends list. SPARK requires that the entire list is propagated up the call chain and hence must appear in a declare annotation for the enclosing unit.

***        Semantic Error              :914: The name XXX specified in the suspends list can never be called.

You have specified the name of a protected or suspension object in the suspends list that can never be called by this procedure or task.

***        Semantic Error              :915: Procedure XXX has a delay property but cannot delay.

You have specified a delay property for this procedure but delay until can never be called from it.

***        Semantic Error              :916:  Protected object XXX has a circular dependency in subprogram YYY.

The type of the protected object mentions the protected object name in the derives list for the given subprogram.

***        Semantic Error              :917: Procedure XXX cannot be called from a protected action.

The procedure being called may block and hence cannot be called from a protected action.

***        Semantic Error              :918: The delay property is not allowed for XXX.

The delay property may only be applied to a procedure.

***        Semantic Error              :919: The priority property is not allowed for XXX.

The priority property can only be applied to protected own variables which are type announced. If the type has been declared it must be a protected type.

***        Semantic Error              :920: The suspends property is not allowed for XXX.

The suspends property may only be applied to task type specifications and procedures.

***        Semantic Error              :921: The identifier XXX is not recognised as a component of a property list.

The property list can only specify the reserved word delay, suspends or priority.

***        Semantic Error              :922: The own variable XXX must have the priority property.

In order to perform the ceiling priority checks the priority property must be given to all own variables of protected type.

***        Semantic Error              :923: The procedure XXX cannot be called from a function as it has a blocking side effect.

Blocking is seen as a side effect and hence procedures that potentially block cannot be called from functions.

***        Semantic Error              :924: The suspendable property is not allowed for XXX.

Objects that suspend must be declared as own protected variables.

***        Semantic Error              :925: The own variable or task XXX must have a type announcement.

Own variables of protected type and own tasks must have a type announcement.

***        Semantic Error              :926: Illegal declaration of task XXX. Task objects must be declared at library level.

Task objects must be declared in library level package specifications or bodies.

***        Semantic Error              :927: The own task annotation for this task is missing the name XXX in its suspends list.

The task type declaration has name XXX in its list and this must appear in the own task annotation.

***        Semantic Error              :928: Private elements are not allowed for protected type XXX.

Protected type XXX has been used to declare a protected, moded own variable. Protected, moded own variables are refined onto a set of virtual elements with the same mode. As such private elements are not allowed.

***        Semantic Error              :929: Unexpected declare annotation. Procedure XXX should have the declare annotation on the specification.

Declare annotations cannot appear on the procedure body if it appears on the procedure specification.

***        Semantic Error              :930: Task XXX does not appear in the own task annotation for this package.

A task has been declared that is not specified as an own task of the package.

***        Semantic Error              :931: Task XXX does not have a definition.

A task name appears in the own task annotation for this package but is never declared.

***        Semantic Error              :932: The priority for protected object XXX does not match that given in the own variable declaration.

The priority given in the priority property must match that given in the protected type.

***        Semantic Error              :933: A pragma Priority is required for the main program when Ravenscar Profile is enabled.

When SPARK profile Ravenscar is selected, all tasks, protected objects and the main program must explicitly be assigned a priority.

***        Semantic Error              :934: Priority ceiling check failure: the priority of YYY is less than that of XXX.

The active priority of a task is the higher of its base priority and the ceiling priorities of all protected objects that it is executing. The active priority at the point of a call to a protected operation must not exceed the ceiling priority of the callee.
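As an illustrative sketch (all names here are hypothetical, not drawn from the Examiner itself), this error would be reported where a task whose base priority exceeds the ceiling of a protected object nevertheless calls one of its operations:

```ada
protected type PT is
   pragma Priority (5);    --  ceiling priority for objects of this type
   procedure Op;
end PT;

PO : PT;

task type Worker is
   pragma Priority (10);   --  base priority exceeds PO's ceiling of 5
end Worker;

task body Worker is
begin
   loop
      PO.Op;               --  error 934: ceiling check fails, 10 > 5
   end loop;
end Worker;
```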

***        Semantic Error              :935: The own variable XXX must have the interrupt property.

An own variable has been declared using a protected type with a pragma attach handler. Such objects are used in interrupt processing and must have the interrupt property specified in their own variable declaration.

***        Semantic Error              :936: The interrupt property is not allowed for XXX.

The interrupt property can only be applied to protected own variables that are type announced. If the type is declared then it must be a protected type that contains an attach handler.

***        Semantic Error              :937: The protects property is not allowed for XXX.

The protects property can only be applied to protected own variables that are type announced. If the type is declared then it must be a protected type.

***        Semantic Error              :938: The unprotected variable XXX is shared by YYY and ZZZ.

XXX is an unprotected variable that appears in the global list of the threads YYY and ZZZ. Unprotected variables cannot be shared between threads in SPARK. A thread is one of: the main program, a task, an interrupt handler.

***        Semantic Error              :939: The suspendable item XXX is referenced by YYY and ZZZ.

XXX is an own variable with the suspends property that appears in the suspends list of the threads YYY and ZZZ. SPARK prohibits this to prevent more than one thread being suspended on the same item at any one time. A thread is one of: the main program, a task, an interrupt handler.

***        Semantic Error              :940: XXX is a protected own variable. Protected variables may not be used in proof contexts.

The use of protected variables in pre and postconditions or other proof annotations is not (currently) supported.  Protected variables are volatile because they can be changed at any time by another program thread and this may invalidate some common proof techniques.  The prohibition of protected variables does not prevent proof of absence of run-time errors nor proof of protected operation bodies.  See the manual "SPARK Proof Manual" for more details.

***        Semantic Error              :941: The type of own variable XXX must be local to this package.

The type used to announce an own variable with a protects property must be declared in the same package.

***        Semantic Error              :942: Only one instance of the type XXX is allowed.

Type XXX has a protects property. This means there can be only one object in the package that has this type or any subtype of this type.

***        Semantic Error              :943: The name XXX cannot appear in a protects list.

All items in a protects list must be unprotected own variables owned by this package.

***        Semantic Error              :944: The name XXX is already protected by YYY.

The name XXX appears in more than one protects list. The first time it appeared was for own variable YYY. XXX should appear in at most one protects list.

***        Semantic Error              :945: The property XXX must be given a static expression for its value.

This property can only accept a static expression.

***        Semantic Error              :946: The own variable XXX must only ever be accessed from operations in protected type YYY.

The own variable XXX is protected by the protected type YYY and hence must never be accessed from anywhere else.

***        Semantic Error              :947: The own variable XXX appears in a protects list for type YYY but is not used in the body.

The protected type YYY claims to protect XXX via a protects property. However, the variable XXX is not used by any operation in YYY.

***        Semantic Error              :948: The type of own variable or task XXX must be a base type.

Own tasks and protected own variables of a protected type must be announced using the base type. The subsequent variable declaration may be a subtype of the base type.

***        Semantic Error              :949: Unexpected partition annotation: a global annotation may only appear here when the Ravenscar profile is selected.

When the sequential SPARK profile is selected, the global and derives annotation on the main program describes the entire program's behaviour.  No additional partition annotation is required or permitted.  Note that an annotation must appear here if the Ravenscar profile is selected.

***        Semantic Error              :950: Partition annotation expected: a global and, optionally, a derives annotation must appear after 'main_program' when the Ravenscar profile is selected.

When the Ravenscar profile is selected the global and derives annotation on the main program describes the behaviour of the environment task only, not the entire program. An additional annotation, called the partition annotation, is required to describe the entire program's behaviour; this annotation follows immediately after 'main_program;'.

***        Semantic Error              :951: Inherited package XXX contains tasks and/or interrupt handlers and must therefore appear in the preceding WITH clause.

In order to ensure that a Ravenscar program is complete, SPARK requires that all 'active' packages inherited by the environment task also appear in a corresponding with clause.  This check ensures that any program entities described in the partition annotation are also linked into the program itself.

***        Semantic Error              :952: Subprogram XXX is an interrupt handler and cannot be called.

Interrupt handler operations cannot be called.

***        Semantic Error              :953: Interrupt property error for own variable YYY. XXX is not an interrupt handler in type ZZZ.

The handler names in an interrupt property must match one in the protected type of the own variable.

***        Semantic Error              :954: Interrupt property error for own variable XXX. Interrupt stream name YYY is illegal.

The stream name must be unprefixed and not already in use within the scope of the package.

***        Semantic Error              :955: XXX can only appear in the partition wide flow annotation.

Interrupt stream variables are used only to enhance the partition wide flow annotation and must not be used elsewhere.

***        Semantic Error              :956: XXX already appears as an interrupt handler in the interrupt mappings.

An interrupt handler can be mapped onto exactly one interrupt stream variable. An interrupt stream variable may be mapped onto many interrupt handlers.

***        Semantic Error              :957: Consecutive updates of protected variable XXX are disallowed when they do not depend directly on its preceding value.

A protected variable cannot be updated more than once within a subprogram or task unless each later update directly references the variable's preceding value. Each update of a protected variable may have a wider effect than just the change of value of the protected variable; the overall change is considered to be the accumulation of all updates and reads of the protected variable, and to preserve this information flow successive updates must depend directly on the preceding value of the variable.
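A hedged sketch of the pattern this error rejects (PO and its operations Set and Add are hypothetical; Set overwrites the protected state, Add reads and then updates it):

```ada
procedure Update_Twice
--# global in out PO;
is
begin
   PO.Set (1);   --  first update of the protected state
   PO.Set (2);   --  error 957: second update ignores the preceding value
end Update_Twice;

procedure Update_Legally
--# global in out PO;
is
begin
   PO.Set (1);
   PO.Add (1);   --  allowed: depends directly on the preceding value
end Update_Legally;
```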

***        Semantic Error              :958: A task may not import the unprotected state XXX.

A task may not import unprotected state unless it is mode in. This is because under the concurrent elaboration policy, the task cannot rely on the state being initialized before it is run.

***        Semantic Error              :959: Unprotected state XXX is exported by a task and hence must not appear in an initializes clause.

Own variable XXX is being accessed by a task. The order in which the task is run and the own variable initialized is non-deterministic under a concurrent elaboration policy. In this case SPARK forces the task to perform the initialization and as such the own variable must not appear in an initializes clause.

***        Semantic Error              :960: The function Ada.Real_Time.Clock can only be used directly (1) in an assignment or return statement or (2) to initialize a library-level constant.

·       To avoid ordering effects, functions which globally access own variables which have modes (indicating that they are connected to the external environment) may only appear directly in assignment or return statements. They may not appear as actual parameters or in any other form of expression.

·       SPARK relaxes the illegal use of function calls in elaboration code in the case of the function Ada.Real_Time.Clock. However the function can only be used to directly initialize a constant value.
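The distinction can be sketched as follows (the package, Log and the declarations are hypothetical):

```ada
with Ada.Real_Time;
package body Timing is
   --  Legal: Clock directly initializes a library-level constant.
   Start : constant Ada.Real_Time.Time := Ada.Real_Time.Clock;

   procedure Stamp is
      T : Ada.Real_Time.Time;
   begin
      T := Ada.Real_Time.Clock;    --  legal: used directly in an assignment
      Log (Ada.Real_Time.Clock);   --  error 960: used as an actual parameter
   end Stamp;
end Timing;
```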

***        Semantic Error              :961: This property value is of an incorrect format.

Please check the user manual for valid property value formats.

***        Semantic Error              :986: A protected function may not call a locally-declared protected procedure.

See LRM 9.5.1 (2). A protected function has read access to the protected elements of the type whereas the called procedure has read-write access. There is no way in which an Ada compiler can determine whether the procedure will illegally update the protected state or not so the call is prohibited by the rules of Ada. (Of course, in SPARK, we know there is no function side effect but the rules of Ada must prevail nonetheless).

***        Semantic Error              :987: Task types and protected types may only be declared in package specifications.

The Examiner performs certain important checks at the whole program level such as detection of illegal sharing of unprotected state and partition-level information flow analysis. These checks require visibility of task types and protected types (especially those containing interrupt handlers).  SPARK therefore requires these types to be declared in package specifications.  Subtypes and objects of task types, protected types and their subtypes may be declared in package bodies.

***        Semantic Error              :988: Illegal re-use of identifier XXX; this identifier is used in a directly visible protected type.

SPARK does not allow the re-use of operation names which are already in use in a directly visible protected type.  The restriction is necessary to avoid overload resolution issues in the protected body.  For example, type PT in package P declares operation K. Package P also declares an operation K. From inside the body of PT, a call to K could refer to either of the two Ks since both are directly visible.

***        Semantic Error              :989: The last statement of a task body must be a plain loop with no exits.

To prevent any possibility of a task terminating (which can lead to a bounded error), SPARK requires each task to end with a non-terminating loop.  The environment task (or "main program") does not need to end in a plain loop provided the program closure includes at least one other task.  If there are no other tasks, then the environment task must be made non-terminating with a plain loop.
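A minimal sketch of the required shape (the names are hypothetical):

```ada
task body Controller is
begin
   Initialize;    --  any start-up code may precede the final loop
   loop           --  last statement: a plain loop with no exits
      Do_Cycle;
   end loop;
end Controller;
```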

***        Semantic Error              :990: Unexpected annotation, a task body may have only global and derives annotations.

Issued if a pre, post or declare annotation is attached to a task body.

***        Semantic Error              :991: Unexpected task body, XXX is not the name of a task declared in this package specification.

Issued if task body is encountered for which there is no preceding declaration.

***        Semantic Error              :992: A body for task type XXX has already been declared.

Issued if a duplicate body or body stub is encountered for a task.

***        Semantic Error              :993: There is no protected type declaration for XXX.

Issued if a body is found for a protected type for which there is no preceding declaration.

***        Semantic Error              :994: Invalid guard, XXX is not a Boolean protected element of this protected type.

The SPARK Ravenscar rules require a simple Boolean guard which must be one of the protected elements of the type declaring the entry.
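A sketch of a legal guard (the type and names are hypothetical):

```ada
protected type Signal is
   entry Wait;
   procedure Send;
private
   Signalled : Boolean := False;   --  Boolean protected element used as guard
end Signal;

protected body Signal is
   entry Wait when Signalled is    --  guard is a Boolean protected element
   begin
      Signalled := False;
   end Wait;

   procedure Send is
   begin
      Signalled := True;
   end Send;
end Signal;
```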

***        Semantic Error              :995: Unexpected entry body, XXX is not the name of an entry declared in this protected type.

Local entries are not permitted, so a protected body can declare at most one entry body and that entry must have been declared in the protected type specification.

***        Semantic Error              :996: The protected operation XXX, declared in this type, does not have an associated body.

Each exported protected operation must have a matching implementation in the associated protected body.

***        Semantic Error              :997: A body for protected type XXX has already been declared.

Each protected type declaration must have exactly one matching protected body or body stub.

***        Semantic Error              :998: There is no protected type declaration for XXX.

Issued if a protected body or body stub is found and there is no matching declaration for it.

***        Semantic Error              :999: This feature of Generics is not yet implemented.


 

6.3               Warning messages

As well as the error messages described above, the Examiner may produce warnings. The presence of warning messages does not mean that the program contains errors, but it does indicate areas which may require additional care by the user, because the construction warned about may affect the meaning of a SPARK program in ways that cannot be detected by the Examiner. Section 9 describes two ways of instructing the Examiner to summarise, rather than individually report, some or all of these warning messages; the keyword used in the warning control file to inhibit full reporting in each case is included in the explanations which follow. The possible warning messages are described below.

                Warning : No semantic checks carried out, text may not be legal SPARK.

Issued when the Examiner is used solely to check the syntax of a SPARK text: this does not check the semantics of a program (e.g. the correctness of the annotations) and therefore does not guarantee that a program is legal SPARK.

In index file <filename> at line <number> column <number> duplication in index files

Issued to standard output when an entry in an index file is a duplicate of another entry (the same unit name and filename) within the same or different index file. (warning control file keyword index_manager_duplicates).

---             Warning       :1: The identifier XXX is either undeclared or not visible at this point.

This warning will appear against an identifier in a with clause if it is not also present in an inherit clause. Such an identifier cannot be used in any non-hidden part of a SPARK program. The use of with without inherit is permitted to allow reference in hidden parts of the text to imported packages which are not legal SPARK. For example, the body of SPARK_IO is hidden and implements the exported operations of the package by use of package TEXT_IO. For this reason TEXT_IO must appear in the with clause of SPARK_IO. (warning control file keyword: with_clauses).

---             Warning       :2: Representation clause - ignored by the Examiner.

The significance of representation clauses cannot be assessed by the Examiner because it depends on the specific memory architecture of the target system. Like pragmas, representation clauses can change the meaning of a SPARK program and the warning highlights the need to ensure their correctness by other means. (warning control file keyword: representation_clauses).

---             Warning       :3: Pragma - ignored by the Examiner.

All pragmas encountered by the Examiner generate this warning. While many pragmas (e.g. pragma page) are harmless others can change a program's meaning, for example by causing two variables to share a single memory location. (warning control file keyword: pragma pragma_identifier or pragma all).

---             Warning       :4: declare annotation - ignored by the Examiner.

The declare annotation is ignored by the Examiner if the profile is not Ravenscar. (warning control file keyword: declare_annotations).

---             Warning       :5: XXX contains interrupt handlers; it is important that an interrupt identifier is not used by more than one handler.

Interrupt identifiers are implementation defined and the Examiner cannot check that values are used only once.  Duplication can occur by declaring more than one object of a single (sub)type where that type defines handlers.  It may also occur if interrupt identifiers are set via discriminants and two or more actual discriminants generate the same value. (warning control file keyword: interrupt_handlers).

---             Warning       :6: Machine code insertion.  Code insertions are ignored by the Examiner.

Machine code is inherently implementation dependent and cannot be analysed by the Examiner.  Users are responsible for ensuring that the behaviour of the inserted machine code matches the annotation of the subprogram containing it.

---             Warning       :7: This identifier is an Ada2005 reserved word.

Such identifiers will be rejected by an Ada2005 compiler and by the Examiner for SPARK2005. It is recommended to rename such identifiers for future upward compatibility. (warning control file keyword: ada2005_reserved_words).

---             Warning       :9: The body of XXX has a hidden exception handler - analysis and verification of contracts for this handler have not been performed.

Issued when a --# hide XXX annotation is used to hide a user-defined exception handler.  (warning control file keyword: handler_parts).

---             Warning       :10: XXX is hidden - hidden text is ignored by the Examiner.

Issued when a --# hide XXX annotation is used.  (warning control file keyword: hidden_parts).

---             Warning       :11: Unnecessary others clause - case statement is already complete.

The others clause is non-executable because all case choices have already been covered explicitly.  If the range of the case choice is altered later then the others clause may be executed with unexpected results.  It is better to omit the others clause in which case any extension of the case range will result in a compilation error.
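For example (a hypothetical enumeration and handlers):

```ada
type Colour is (Red, Green, Blue);

procedure Act (C : in Colour) is
begin
   case C is
      when Red    => Handle_Red;
      when Green  => Handle_Green;
      when Blue   => Handle_Blue;
      when others => null;   --  warning 11: all choices already covered;
   end case;                 --  omitting others means any later extension
end Act;                     --  of Colour is caught at compile time
```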

---             Warning       :12: Function XXX is an instantiation of Unchecked_Conversion.

See ALRM 13.9.  The use of Unchecked_Conversion can result in implementation-defined values being returned.  The function should be used with great care.  The principal use of Unchecked_Conversion in SPARK programs is for the reading of external ports prior to performing a validity check; here the suppression of constraint checking prior to validation is useful.  The Examiner does not assume that the value returned by an unchecked conversion is valid and so unprovable run-time check VCs will result if a suitable validity check is not carried out before the value is used. (warning control file keyword: unchecked_conversion).

---             Warning       :13: Function XXX is an instantiation of Unchecked_Conversion returning a type for which run-time checks are not generated.  Users must take steps to ensure the validity of the returned value.

See ALRM 13.9.  The use of Unchecked_Conversion can result in invalid values being returned.  The function should be used with great care especially, as in this case, where the type returned does not generate Ada run-time checks nor SPARK run-time verification conditions.  For such types, this warning is the ONLY reminder the Examiner generates that the generated value may have an invalid representation. For this reason the warning is NOT suppressed by the warning control file keyword unchecked_conversion. The principal use of Unchecked_Conversion in SPARK programs is for the reading of external ports prior to performing a validity check; here the suppression of constraint checking prior to validation is useful.

---             Warning       :120: Unexpected unmatched 'end accept' annotation ignored.

This end accept annotation does not match any preceding start accept in this unit.

---             Warning       :121: No warning message matches this accept annotation.

The accept annotation is used to indicate that a particular flow error or semantic warning message is expected and can be justified.  This error indicates that the expected message did not actually occur.  Note that when matching any information flow error messages containing two variable names, the export should be placed first and the import second (the order in the error message may differ from this depending on the style of information flow error reporting selected).  For example: --# accept Flow, 601, X, Y, "..."; justifies the message: "X may be derived from the imported value(s) of Y" or the alternative form: "Y may be used in the derivation of X".

---             Warning       :122: Maximum number of error or warning justifications reached, subsequent accept annotations will be ignored.

The number of justifications per source file is limited.  If you reach this limit it is worth careful consideration of why the code generates so many warnings.  If reducing the number really is infeasible then you should contact Praxis HIS for advice.

---             Warning       :169: Direct update of own variable XXX, which is an own variable of a non-enclosing package.

With the publication of Edition 3.1 of the SPARK Definition, the previous restriction prohibiting the direct updating of own variables of non-enclosing packages was removed; however, the preferred use of packages as abstract state machines is compromised by such updates, which are therefore discouraged. (warning control file keyword: direct_updates).

---             Warning       :200: This static expression cannot be evaluated by the Examiner.

Issued if a static expression exceeds the internal limits of the Examiner because its value is, for example, too large to be evaluated using infinite precision arithmetic. No value will be recorded for the expression and this may limit the Examiner's ability to detect certain sorts of errors such as numeric constraints. (warning control file keyword: static_expressions).

---             Warning       :201: This expression cannot be evaluated statically because its value may be implementation-defined.

Raised, for example, when evaluating 'Size of a type that does not have an explicit Size representation clause. Attributes of implementation-defined types, such as Integer'Last, may also be unknown to the Examiner if they are not specified in the configuration file. (warning control file keyword: static_expressions).

---             Warning       :202: An arithmetic overflow has occurred. Constraint checks have not been performed.

Raised when comparing two real numbers. The Examiner cannot deal with real numbers specified to such a high degree of precision. Consider reducing the precision of these numbers.

---             Warning       :300: VCs cannot be built for multi-dimensional array aggregates.

Issued when an aggregate of a multi-dimensional array is found. Suppresses generation of VCs for that subprogram. Can be worked round by using arrays of arrays.

---             Warning       :301: Called subprogram exports abstract types for which RTCs are not possible.

---             Warning       :302: This expression may be re-ordered by a compiler. Add parentheses to remove ambiguity.

Issued when a potentially re-orderable expression is encountered, for example x := a + b + c;. Whether intermediate sub-expression values overflow may depend on the order of evaluation, which is compiler-dependent. Therefore, code generating this warning should be parenthesised to remove the ambiguity, e.g. x := (a + b) + c;. This warning may be suppressed with the warning control file keyword expression_reordering if VCs are not being generated.

---             Warning       :303: Overlapping choices may not be detected.

Issued where choices in an array aggregate or case statement are outside the range which can be detected because of limits on the size of a table internal to the Examiner.

---             Warning       :304: Case statement may be incomplete.

Issued when the Examiner cannot determine the completeness of a case statement because the bounds of the type of the controlling expression exceed the size of the internal table used to perform the checks.

---             Warning       :305: Value too big for internal representation.

Issued when the Examiner cannot determine the completeness of an array aggregate or case statement because the number used in a choice exceeds the size allowed in the internal table used to perform the checks.

---             Warning       :306: Aggregate may be incomplete.

Issued when the Examiner cannot determine the completeness of an array aggregate because its bounds exceed the size of the internal table used to perform the checks.

---             Warning       :307: Completeness checking incomplete: index type(s) undefined or not discrete.

Issued where the array index (sub)type is inappropriate: this is probably because there is an error in its definition, which will have been indicated by a previous error message.

---             Warning       :308: Use of equality operator with floating point type.

The use of this operator is discouraged in SPARK because of the difficulty in determining exactly what it means to say that two instances of a floating point number are equal.

---             Warning       :309: Unnecessary type conversion to own type.

Issued where a type conversion is either converting from a (sub)type to the same (sub)type or is converting between two subtypes of the same type. In the former case the type conversion may be safely removed because no constraint check is required; in the latter case the type conversion may be safely replaced by a type qualification which preserves the constraint check. (warning control file keyword: type_conversions).

---             Warning       :310: Use of obsolescent Ada 83 language feature.

Issued when a language feature defined by Ada 95 to be obsolescent is used.  Use of such features is not recommended because compiler support for them cannot be guaranteed. (warning control file keyword: obsolescent_features).

---             Warning       :311: Priority pragma for XXX is unavailable and has not been considered in the ceiling priority check.

---             Warning       :312: Replacement rules cannot be built for multi-dimensional array constant XXX.

Issued when a VC or PF references a multi-dimensional array constant. Can be worked round by using arrays of arrays.

---             Warning       :313: The constant XXX has semantic errors in its initializing expression or has a hidden completion which prevent generation of a replacement rule.

Issued when replacement rules are requested for a composite constant which had semantic errors in its initializing expression, or is a deferred constant whose completion is hidden from the Examiner. Semantic errors must be eliminated before replacement rules can be generated.

---             Warning       :314: The constant XXX has semantic errors in its type which prevent generation of rules.

Issued when an attempt is made to generate type deduction rules for a constant which has semantic errors in its type.  These semantic errors must be eliminated before type deduction rules can be generated.

---             Warning       :350: Unexpected pragma Import.  Variable XXX is not identified as an external (stream) variable.

The presence of a pragma Import makes it possible that the variable is connected to some external device.  The behaviour of such variables is best captured by making them moded own variables (or "stream" variables).  If variables connected to the external environment are treated as if they are normal program variables then misleading analysis results are inevitable.  The use of pragma Import on local variables of subprograms is particularly deprecated. The warning may safely be disregarded if the variable is not associated with memory-mapped input/output or if the variable concerned is an own variable and the operations on it are suitably annotated to indicate volatile, stream-like behaviour. Where pragma Import is used, it is essential that the variable is properly initialized at the point from which it is imported. (warning control file keyword: imported_objects).

---             Warning       :351: Unexpected address clause. XXX is a constant.

Great care is needed when attaching an address clause to a constant.  The use of such a clause is safe if, and only if, the address supplied provides a valid value for the constant which does not vary during the execution life of the program, for example, mapping the constant to PROM data. If the address clause causes the constant to have a value which may alter, or worse, change dynamically under the influence of some device external to the program, then misleading or incorrect analysis is certain to result. If the intention is to create an input port of some kind, then a constant should not be used.  Instead a moded own variable (or "stream" variable) should be used. (warning control file keyword: address_clauses).

---             Warning       :380: Casing inconsistent with declaration. Expected casing is XXX.

The Examiner checks the case used for an identifier against the declaration of that identifier and warns if they do not match (warning control file keyword: style_check_casing).

---             Warning       :388: The validity of instantiating the generic array parameter XXX is not properly checked by the Examiner yet but it will be fully checked by an Ada compiler.

---             Warning       :389: Generation of VCs for consistency of generic and instantiated subprogram constraints is not yet supported. It will be supported in a future release of the Examiner.

---             Warning       :390: This generic subprogram has semantic errors in its declaration which prevent instantiations of it.

Issued to inform the user that a generic subprogram instantiation cannot be completed because of earlier errors in the generic declaration.

---             Warning       :391: If the identifier XXX represents a package which contains a task or an interrupt handler then the partition-level analysis performed by the Examiner will be incomplete.  Such packages must be inherited as well as withed.

---             Warning       :392: External variable XXX may have an invalid representation.

Where values are read from external variables (i.e. variables connected to the external environment) there is no guarantee that the bit pattern read will be a valid representation for the type of the external variable.  Unexpected behaviour may result if invalid values are used in expressions.  For SPARK 95 the use of attribute 'Valid is strongly recommended (see ALRM 13.9.2).  For SPARK 83 external variables should always be read into variables of a type for which any bit pattern would be a valid representation (e.g. a "word")  and then range checked before conversion to the actual type desired.  Note that when the Examiner is used to generate run-time checks, it will not be possible to discharge those involving external variables unless these steps are taken. Boolean external variables require special care since the Examiner does not generate run-time checks for Boolean variables; use of 'Valid is essential when reading Boolean external variables. More information on interfacing can be found in the INFORMED manual. (warning control file keyword: external_assignment).
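For SPARK 95, the recommended use of 'Valid might be sketched as follows. This is an illustration only: the names Port, Temperature, Read, Value and OK are hypothetical, and the enclosing package (in which Port is a moded own variable) is omitted.

procedure Read (Value : out Temperature; OK : out Boolean)
--# global  in Port;
--# derives Value, OK from Port;
is
   Raw : Temperature;
begin
   Raw := Port;                    -- read the external (stream) variable
   if Raw'Valid then               -- SPARK 95: check the bit pattern is valid
      Value := Raw;
      OK    := True;
   else
      Value := Temperature'First;  -- substitute a known-safe default
      OK    := False;
   end if;
end Read;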

---             Warning       :393: External variable XXX may have an invalid representation and is of a type for which run-time checks are not generated.  Users must take steps to ensure the validity of the assigned or returned value.

Where values are read from external variables (i.e. variables connected to the external environment) there is no guarantee that the bit pattern read will be a valid representation for the type of the external variable.  Unexpected behaviour may result if invalid values are used in expressions.  Where, as in this case, the type is one for which neither Ada run-time checks nor SPARK run-time verification conditions are generated, extra care is required.  For such types, this warning is the ONLY reminder the Examiner generates that the external value may have an invalid representation. For this reason the warning is NOT suppressed by the warning control file keyword external_assignment. For SPARK 95 the use of attribute 'Valid is strongly recommended (see ALRM 13.9.2).  For SPARK 83 external variables should always be read into variables of a type for which any bit pattern would be a valid representation (e.g. a "word") and then range checked before conversion to the actual type desired.  Note that when the Examiner is used to generate run-time checks, it will not be possible to discharge those involving external variables unless these steps are taken. Boolean external variables require special care since the Examiner does not generate run-time checks for Boolean variables; use of 'Valid is essential when reading Boolean external variables. More information on interfacing can be found in the INFORMED manual.

---             Warning       :394: Variables of type XXX cannot be initialized using the facilities of this package.

A variable of a private type can only be used (without generating a data flow error) if there is some way of giving it an initial value.  For a limited private type only a procedure that has an export of that type and no imports of that type is suitable.  For a private type either a procedure, function or (deferred) constant is required.  The required facility may be placed in, or already available in, a public child package. (warning control file keyword: private_types).

---             Warning       :395: Variable XXX is an external (stream) variable but does not have an address clause or a pragma import.

When own variables are given modes they are considered to be inputs from or outputs to the external environment.  The Examiner regards them as being volatile (i.e. their values can change in ways not visible from an inspection of the source code).  If a variable is declared in that way but it is actually an ordinary variable which is NOT connected to the environment then misleading analysis is inevitable. The Examiner expects to find an address clause or pragma import for variables of this kind to indicate that they are indeed memory-mapped input/output ports.  This warning is issued if an address clause or pragma import is not found.

---             Warning       :396: Unexpected address clause.  Variable XXX is not identified as an external (stream) variable.

The presence of an address clause makes it possible that the variable is connected to some external device.  The behaviour of such variables is best captured by making them moded own variables (or "stream" variables).  If variables connected to the external environment are treated as if they are normal program variables then misleading analysis results are inevitable.  The use of address clauses on local variables of subprograms is particularly deprecated. The warning may safely be disregarded if the variable is not associated with memory-mapped input/output or if the variable concerned is an own variable and the operations on it are suitably annotated to indicate volatile, stream-like behaviour. (warning control file keyword: address_clauses).
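The pattern the Examiner expects can be sketched as below. The names Device, Status_Port and Word are hypothetical, and the form of the address expression is illustrative only (it varies between compilers and targets); the ellipses follow the abbreviated style used elsewhere in this manual.

package Device
--# own in Status_Port;              -- moded own variable: an input stream
is
   …
end Device;

package body Device
is
   Status_Port : Word;
   for Status_Port'Address use …;    -- maps the port to the device register
   …
end Device;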

---             Warning       :397: Variables of type XXX can never be initialized before use.

A variable of a private type can only be used (without generating a data flow error) if there is some way of giving it an initial value.  For a limited private type only a procedure that has an export of that type and no imports of that type is suitable.  For a private type either a procedure, function or (deferred) constant is required.

---             Warning       :398: The own variable XXX can never be initialized before use.

The own variable can only be used (without generating a data flow error) if there is some way of giving it an initial value.  If it is initialized during package elaboration (or implicitly by the environment  because it represents an input port) it should be placed in an "initializes" annotation. Otherwise there needs to be some way of assigning an initial value during program execution.  Either the own variable needs to be declared in the visible part of the package so that a direct assignment can be made to it or, more usually, the package must declare at least one procedure for which the own variable is an export but not an import. Note that if the own variable is an abstract own variable with some constituents initialized during elaboration and some during program execution then it will never be possible correctly to initialize it; such abstract own variables must be divided into separate initialized and uninitialized components.
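The two common remedies can be sketched as follows (package and variable names are hypothetical, and the ellipses follow the abbreviated style used elsewhere in this manual). Either announce initialization during elaboration:

package Counter
--# own State;
--# initializes State;     -- State is given its value during elaboration
is
   …
end Counter;

or provide a procedure for which the own variable is an export but not an import:

package Counter
--# own State;
is
   procedure Reset;
   --# global  out State;
   --# derives State from ;   -- State is derived from nothing: pure initialization
end Counter;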

---             Warning       :399: The called subprogram has semantic errors in its interface (parameters and/or annotations) which prevent flow analysis of this call.

Issued to inform the user that flow analysis has been suppressed because of the error in the called subprogram's interface.

---             Warning       :400: Variable XXX is declared but not used.

Issued when a variable declared in a subprogram is neither referenced, nor updated. (warning control file keyword: unused_variables).

---             Warning       :402: Default assertion planted to cut loop.

In order to prove properties of code containing loops, the loop must be "cut" with a suitable assertion statement.  When generating run-time checks, the Examiner inserts a simple assertion to cut any loops which do not have one supplied by the user.  The assertion is placed at the point where this warning appears in the listing file.  The default assertion asserts that the subprogram's precondition (if any) is satisfied, that all imports to it are in their subtypes and that any for loop counter is in its subtype.  In many cases this provides sufficient information to complete a proof of absence of run-time errors.  If more information is required, then the user can supply an assertion and the Examiner will append the above information to it. (warning control file keyword: default_loop_assertions).

---             Warning       :403: XXX is declared as a variable but used as a constant.

XXX is a variable which was initialized at declaration but whose value is only ever read not updated; it could therefore have been declared as a constant. (warning control file keyword: constant_variables).
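For example (hypothetical declaration):

Limit : T := 100;            -- initialized but only ever read: warning 403
Limit : constant T := 100;   -- preferred declaration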

---             Warning       :404: Subprogram imports variables of abstract types for which run-time checks cannot be generated.

---             Warning       :405: VCs for statements including real numbers are approximate.

The Examiner generates VCs associated with real numbers using perfect arithmetic rather than the machine approximations used on the target platform.  It is possible that rounding errors might cause a Constraint_Error even if these run-time check proofs are completed satisfactorily. (warning control file keyword: real_rtcs).

---             Warning       :407: This package requires a body.  Care should be taken to provide one because an Ada compiler will not detect its omission.

Issued where SPARK own variable and initialization annotations make it clear that a package requires a body but where no Ada requirement for a body exists.

---             Warning       :408: VCs could not be generated for this subprogram owing to semantic errors in its specification or body. Unprovable (False) VC generated.

Semantic errors prevent VC Generation, so a single False VC is produced. This will be detected and reported by POGS.

---             Warning       :409: VCs could not be generated for this subprogram due to its size and/or complexity exceeding the capacity of the VC Generator.  Unprovable (False) VC generated.

A subprogram that has excessive complexity of data structure or number of paths may cause the VC Generator to exceed its capacity. A single False VC is generated in this case to make sure this error is detected in subsequent proof and analysis with POGS.

---             Warning       :410: Task or interrupt handler XXX is either unavailable (hidden) or has semantic errors in its specification which prevent partition-wide flow analysis being carried out.

Partition-wide flow analysis is performed by checking all packages withed by the main program for tasks and interrupt handlers and constructing an overall flow relation that captures their cumulative effect.  It is for this reason that SPARK requires task and protected types to be declared in package specifications.  If a task, or a protected type which contains an interrupt handler, is hidden from the Examiner (in a hidden package private part) or contains errors in its specification, the partition-wide flow analysis cannot be performed correctly and is therefore suppressed. Correct the specification of the affected tasks and (temporarily if desired) make them visible to the Examiner.

---             Warning       :411: Task type XXX is unavailable and has not been considered in the shared variable check.

The Examiner checks that there is no potential sharing of unprotected data between tasks.  If a task type is hidden from the Examiner in a hidden package private part, then it is not possible to check whether that task may share unprotected data.

---             Warning       :412: Task type XXX is unavailable and has not been considered in the max-one-in-a-queue check.

The Examiner checks that no more than one task can suspend on a single object.  If a task is hidden from the Examiner in a hidden package private part, then it is not possible to check whether that task may suspend on the same object as another task.

---             Warning       :413: Task or main program XXX has errors in its annotations. The shared variable and max-one-in-a-queue checks may be incomplete.

The Examiner checks that no more than one task can suspend on a single object and that there is no potential sharing of unprotected data between tasks.  These checks depend on the accuracy of the annotations on the task types withed by the main program.  If these annotations contain errors, then any reported violations of the shared variable and max-one-in-a-queue checks will be correct; however, the check may be incomplete.  The errors in the task annotations should be corrected.

---             Warning       :415: The analysis of generic packages is not yet supported. It will be supported in a future release of the Examiner.

---             Warning       :420: Instance of SEPR 2124 found. An extra VC will be generated here and must be discharged to ensure absence of run-time errors. Please contact Altran Praxis for assistance with this issue.

In release 7.5 of the Examiner, a flaw in the VC generation was fixed: subcomponents of records and elements of arrays used as “out” or “in out” parameters now generate an additional VC to verify absence of run-time errors. This warning flags an instance of this occurrence. Please read the release note and/or contact Praxis for more information.

---             Warning       :425: The -vcg switch should be used with the selected language profile.

A code generator language profile such as KCG is in use and so conditional flow errors may be present in the subprogram. Therefore the -vcg switch must be used to generate VCs and the VCs related to definedness discharged using the proof tools.

---             Warning       :426: The with_clause contains a reference to a public child of the package. The Examiner will not detect mutual recursion between subprograms of the two packages.

A code generator language profile such as KCG allows a package body to with its own public child which is not normally permitted in SPARK. The removal of this restriction means that the Examiner will not detect mutual recursion between subprograms declared in the visible parts of the package and its descendant. The code generator is expected to guarantee the absence of recursion.

---             Warning       :430: SLI generation abandoned owing to syntax or semantic errors or multiple units in a single source file (warning control file keyword: sli_generation).


 

6.4               Notes

The Examiner may also produce the following messages, which draw the user's attention to points which are considered worthy of note, but which are considered to be less serious than the warnings described above.

Some of these messages may be suppressed by the use of the warning control file as described in section 4.3 where indicated below.

                Note: Ada 83 language rules selected.

Issued when the Examiner is used in SPARK 83 mode.

                Note: Information flow analysis not carried out.

This is issued as a reminder that information flow analysis has not been carried out in this run of the Examiner: information flow errors may be present undetected in the text analysed.

---             Note             :1: This dependency relation was not used for this analysis and has not been checked for accuracy.

Issued when information flow analysis is not performed and when modes were specified in the global annotation. It is a reminder that the dependencies specified in this annotation (including whether each variable is an import or an export) have not been checked against the code, and may therefore be incorrect. (warning control file keyword: notes).

---             Note             :2: This dependency relation has been used only to identify imports and exports, dependencies have been ignored.

Issued as a reminder when information flow analysis is not performed in SPARK 83. The dependencies specified in this annotation have not been checked against the code, and may therefore be incorrect. (warning control file keyword: notes).

---             Note             :3: The deferred constant Null_Address has been implicitly defined here.

Issued as a reminder that the declaration of the type Address within the target configuration file implicitly defines the deferred constant Null_Address, of type Address. (warning control file keyword: notes).

---             Note             :4: The constant Default_Priority, of type Priority, has been implicitly defined here.

Issued as a reminder that the declaration of the subtype Priority within the target configuration file implicitly defines a constant Default_Priority, of type Priority, with the value (Priority'First + Priority'Last) / 2. (warning control file keyword: notes).

 

 

7                       Control flow analysis

7.1               General description

Since Ada is already rich in control structures, it is possible to remove its goto statement without unreasonably hindering the programmer. Then, to ensure that control structures are "well-formed", for analysis purposes, it suffices to place minor restrictions on the placement of exit statements, and return statements. (Precise details are given in the SPARK Definition.) Checks that exit and return statements have been correctly placed are performed directly on the abstract syntax tree of a SPARK text.

7.2               Error messages

Exit statements that violate the rules given in the SPARK Definition are always indicated by the following message:

***        Illegal Structure             :1: An exit statement may not occur here.

Exit statements must be of the form "exit when c;" where the closest enclosing statement is a loop or "if c then S; exit;" where the if statement has no else part and its closest enclosing statement is a loop.  See the SPARK Definition for details.
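The two legal forms can be sketched as follows (the condition Done and the ellipses are illustrative):

loop
   …
   exit when Done;     -- legal: "exit when c;" directly within the loop
end loop;

loop
   …
   if Done then
      …
      exit;            -- legal: last statement of an if with no else part,
   end if;             -- whose closest enclosing statement is the loop
end loop;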

Violations of the rules concerning return statements are indicated by messages of the following kinds:

***        Illegal Structure             :2: A return statement may not occur here.

A return statement may only occur as the last statement of a function.

***        Illegal Structure             :3: The last statement of this function is not a return statement.

SPARK requires that the last statement of a function be a return statement.

***        Illegal Structure             :4: Return statements may not occur in procedure subprograms.

8                       Data and information flow analysis

8.1               General description

The Examiner performs data-flow and information-flow analysis by computing flow relations, based on the principles described in the paper in Appendix A (though the recurrence relations used to compute the flow relations of SPARK loop structures are rather more complex). Data-flow and information-flow errors are detected by mechanical inspection of these relations.

This section describes the different modes of flow analysis and the annotations that are required for each mode.

8.2               Information flow analysis

Information flow analysis is enabled by default. In this mode the Examiner computes the actual dependency relation for a subprogram[2] body, based on an analysis of the code, and checks that it satisfies the dependency relation specified by the derives annotation. Any discrepancies found are reported as flow errors.

8.3               Data flow analysis

8.3.1           Description

This feature can be used for analysis of SPARK95 or SPARK2005 programs where global modes have been provided. The syntax for moded global annotations is described in the SPARK LRM. For example:

 

procedure P(X : in     T;

            Y :    out T);

--# global in     A;

--#        in out B, C;

--#           out D;

Provided global modes are present and the Examiner is operating in SPARK95 or SPARK2005 mode, data flow analysis can be selected using the -flow_analysis=data command line option described in Section 3.1.4.  When selected, the Examiner will ignore any derives annotations it encounters.  A note to this effect is generated but it can be suppressed if desired by using the warning file mechanism described in Section 4.6.  Instead of using the derives annotation, the Examiner determines whether each parameter and global is an import or an export by checking its mode.  Data flow analysis is then conducted using this mode information.  The validity of the data flow errors listed in Section 8.7.1 is completely unaffected.  Other error messages generated also remain valid; however, some errors normally found by information flow analysis may no longer be detected.  For example, if the Examiner (in data flow analysis mode) detects a stable loop (see Section 8.7.3) then the loop is stable; however, not all cases of loop stability detectable by information flow analysis will necessarily be indicated.  The specific information flow error messages described in Section 8.7.4 will not appear at all.

Note that functions never have derives annotations because they only ever have a single export (the return value) which must always be derived from all of the imports. Functions are always subject to information flow analysis regardless of the selected flow analysis option.

It is possible to generate verification conditions, including run-time checks and dead path conjectures, in conjunction with the data flow analysis option; information flow analysis is not a prerequisite for these forms of analysis.

8.3.2           SPARK83 data flow analysis

In SPARK83, derives annotations are mandatory and globals do not have modes. If the data flow analysis option is selected when operating the Examiner in SPARK 83 mode then the derives annotations are used to infer the modes of any globals. However, under the Ada 83 language rules the mode of a parameter is not sufficient to determine whether it is an import or an export, so there are certain subtleties to be aware of.  Users who have a need for data flow analysis of SPARK 83 programs should contact Altran Praxis for guidance.

8.4               Automatic selection of flow analysis mode

The Examiner can determine the flow analysis mode to be used automatically for each subprogram, based on the presence or absence of derives annotations. This behaviour can be selected using the -flow_analysis=auto command line option described in Section 3.1.4. When this option is selected, if a procedure, task or entry has a derives annotation then it will be analysed in information-flow mode when it comes to cross-checking the derives annotation against the implementation. If there is no derives annotation then data-flow analysis will be performed. This gives developers the flexibility of using information-flow analysis only for those parts of the program where it is required, for example lower in the calling hierarchy where the derives annotations tend to be smaller and easier to maintain and add the most value, whilst using data-flow analysis for other areas of the same program. It also gives the option of developing all or part of a program with moded global annotations to begin with and adding the derives annotations at a later stage if necessary.

The following summarises how automatic flow analysis works for the various combinations of calling and called subprograms with and without derives annotations. For functions (which never have derives annotations) the term “implied dependency” represents the dependency relation “the function’s single return value is derived from all of the function’s imports”.

Calling subprogram: procedure P with derives annotation

·          Called procedure Q with derives annotation: body of P analysed in information flow mode and checked against P’s derives annotation. Analysis of the call to Q uses Q’s derives annotation.

·          Called procedure Q without derives annotation: illegal. Semantic error 176 is raised (see note 1 below).

·          Called function G: body of P analysed in information flow mode and checked against P’s derives annotation. Analysis of the call to G uses G’s implied dependency.

Calling subprogram: procedure P without derives annotation

·          Called procedure Q with derives annotation: body of P analysed in data flow mode. Analysis of the call to Q uses Q’s moded globals for data flow analysis. (Q’s derives annotation is ignored.)

·          Called procedure Q without derives annotation: body of P analysed in data flow mode. Analysis of the call to Q uses Q’s moded globals for data flow analysis.

·          Called function G: body of P analysed in data flow mode. Analysis of the call to G uses G’s implied dependency relation.

Calling subprogram: function F

·          Called procedure Q with derives annotation: body of F analysed in information flow mode and checked against F’s implied dependency. Analysis of the call to Q uses Q’s derives annotation.

·          Called procedure Q without derives annotation: illegal. Semantic error 176 is raised (see note 1 below).

·          Called function G: body of F analysed in information flow mode and checked against F’s implied dependency. Analysis of the call to G uses G’s implied dependency.

 

There are some additional rules and notes to be aware of when using automatic flow analysis mode.

1.       A procedure with a derives annotation, or a function, is not permitted to call a procedure without a derives annotation. The Examiner reports a semantic error if such a call is detected. This is because the more detailed information flow analysis of the calling subprogram cannot be performed using the less detailed specification of the called subprogram. If this were permitted then spurious flow errors could result or, worse, genuine flow errors could be overlooked. Consider the following example.

  procedure Add_And_Subtract_Private (X, Y : in T; Add, Sub : out T)

  is

  begin

    Add := X + 1;

    Sub := Y - 1;

  end Add_And_Subtract_Private;

 

  procedure Add_And_Subtract (X, Y : in T; Add, Sub : out T)

  --# derives Add, Sub from X, Y;

  is

  begin

    Add_And_Subtract_Private (X, Y, Add, Sub);

  end Add_And_Subtract;

Assuming that Add_And_Subtract_Private does not have a separate specification with a derives annotation, when it is called by Add_And_Subtract the Examiner will make the pessimistic assumption that each out parameter is derived from both in parameters. (In this case we happen to know this is wrong because we can see the body of Add_And_Subtract_Private but it might be hidden, or not yet written, or written in another language that is being interfaced to.) So if such a call were permitted the Examiner would analyse Add_And_Subtract_Private in data-flow mode and report no errors, then it would analyse Add_And_Subtract in information-flow mode and report no errors. To avoid this counter-intuitive behaviour the call to Add_And_Subtract_Private is considered to be illegal and semantic error 176 will be raised. This would need to be resolved by adding an appropriate derives annotation to Add_And_Subtract_Private.

Therefore the general approach when using automatic flow analysis is to have full derives annotations lower in the calling hierarchy and (optionally) to omit the derives annotations higher in the calling hierarchy but not vice-versa. It is also possible to split a program into vertical slices (partitions) in which all procedures do or do not have derives annotations (perhaps based on an assessment of the criticality of the functionality in each slice).

Note that functions are considered to have derives annotations for the purposes of this analysis, because they have implicit dependencies and are analysed in information flow mode.

2.       If abstract own variable refinement is used in conjunction with automatic flow analysis mode then the following rules apply:

a.       If the abstract version of the annotation includes a derives annotation then the refined version must also have a derives annotation.

b.       If the abstract version of the annotation has no derives annotation then the refined version must not have a derives annotation.

3.       Variables appearing in global annotations are required to have modes when automatic flow analysis is selected. Specifying modes on globals is considered to be general good practice in SPARK. If an existing program has derives annotations and unmoded globals then the SPARKFormat tool can be used to automatically insert the modes on the globals.

4.       Automatic flow analysis mode is not currently compatible with safety and security policy checking.

5.       Automatic flow analysis mode is not compatible with SPARK83 mode.

8.5               Recommended use of derives annotations

For new developments we recommend that global modes are supplied and that derives annotations are also supplied except where, high in the calling tree, their size and complexity is judged to make them of limited value. This approach allows full information flow analysis to be performed lower in the calling hierarchy while still permitting full protection from data flow errors and language violations at the top of the calling tree (e.g. in the system main loop or scheduler).  Additionally, the use of moded globals as a substitute for derives annotations may be appropriate for some, perhaps less critical, systems. If a program is partitioned so that only some of the subprograms have derives annotations then it should be analysed using automatic flow analysis and the structure should follow the guidance given in section 8.4 above.

8.6               Checking safety and security policies using flow analysis

The Examiner can check specific information-flow policies, using the Integrity property of an own variable to “label” the integrity of an input, output, or state variable.

Two policies are predefined, called “security” and “safety” – these are described in sections 8.6.2.1 and 8.6.2.2 below respectively. Other policies may be defined in future, or may be added by users through modification of the Examiner itself.

8.6.1           The own variable Integrity property

The own variable annotation has an optional extension that is used to introduce the notion of a “property” of such a variable. The Examiner supports a property called “Integrity” that forms the basis of this analysis.

Integrity is a name_property in the grammar. The name on the left must be “Integrity” and the value on the right must be a static expression of type Natural. The use of type Natural allows an ordered relationship between Integrities of own variables – i.e. one Integrity may be interpreted as greater than another.

Example:

package Sensor

--# own in Critical_Input (Integrity => 4);

is

   …

end Sensor;

It may be useful, though, to define a library-level package that declares named constants to represent the required Integrity levels with meaningful names. For example:

package Integrities

is

   Top_Secret   : constant Natural := 4;

   Secret       : constant Natural := 3;

   Restricted   : constant Natural := 2;

   Unclassified : constant Natural := 1;

end Integrities;

Or perhaps:

package DS0055

is

   SIL4 : constant Natural := 4;

   SIL3 : constant Natural := 3;

   SIL2 : constant Natural := 2;

   SIL1 : constant Natural := 1;

   SIL0 : constant Natural := 0;

end DS0055;

then

--# inherit Integrities;

package Sensor

--# own in Critical_Input (Integrity => Integrities.Top_Secret);

is

   …

end Sensor;

8.6.1.1        Own variable refinement and Integrities

Refinement of own variables is permitted as normal, with the following additional rules:

·          If an abstract own variable has an Integrity property, but a refinement constituent (which might be an own variable of a nested package or a private child package) does not, then the constituent inherits the same Integrity as the abstract own variable.

·          If both an abstract own variable and one of its refinement constituents have an Integrity property, then the value of the Integrity properties must be identical. If this test fails, then a semantic error 862 is raised:

***        Semantic Error              :862: Both abstract own variable XXX and refinement constituent YYY must have the same Integrity value.

If both an abstract own variable and a refinement constituent have Integrity properties specified, then the value of the Integrity must be the same.

·          If an abstract own variable does not have an Integrity property, but a refinement constituent does, then semantic error 861 is raised.

***        Semantic Error              :861: Both abstract own variable XXX and refinement constituent YYY must have an Integrity property.

If an abstract own variable has an Integrity property, then so must all its refinement constituents, and vice-versa.
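As an illustrative sketch of these rules (the package, constituent and constant names here are hypothetical), an abstract own variable with an Integrity property may be refined onto constituents that either repeat the same value or omit the property and inherit it:

```ada
--# inherit Integrities;
package Sensor
--# own State (Integrity => Integrities.Secret);
is
   ...
end Sensor;

package body Sensor
-- Raw_Reading repeats the abstract value, which is legal; Filtered_Reading
-- omits the property and therefore inherits Integrity Secret.
--# own State is Raw_Reading (Integrity => Integrities.Secret),
--#              Filtered_Reading;
is
   ...
end Sensor;
```

Stating a different Integrity value on a constituent would raise semantic error 862; giving a constituent an Integrity property when the abstract State had none would raise semantic error 861.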

8.6.2           Policy checking for information flow analysis

Given a program which has been annotated with suitable Integrity properties, the Examiner may then enforce various policies as it performs information-flow analysis.

The policy is chosen with the Examiner’s –policy switch. The default is no policy, in which case no checking of Integrities is performed at all.

If the policy switch is used, then –flow=information mode must also be selected. When –flow=data is selected, the policy switch may not be used.

The Examiner supports two predefined policies, described below.  If a violation of the selected policy is found, then flow error 57 is reported (see section 8.7.6).

8.6.2.1        Security policy

This policy is selected with –policy=security. In this mode, information may not flow from an own variable (or input) with a higher valued Integrity to an own variable (or output) with a lower-valued Integrity.

Intuitively, this policy reflects that

·          “Top_Secret” input or state may flow to “Top_Secret” state or output.

·          “Unclassified” input or state may flow to “Top_Secret” state or output.

·          “Top_Secret” input or state may not flow to “Unclassified” state or output.
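A minimal sketch of a violation of this policy, assuming the hypothetical Sensor and Integrities packages introduced in section 8.6.1 together with an Unclassified output:

```ada
--# inherit Integrities;
package Logger
--# own out Audit_Log (Integrity => Integrities.Unclassified);
is
   ...
end Logger;

-- Under -policy=security this dependency carries Top_Secret (Integrity 4)
-- information to an Unclassified (Integrity 1) output, so the Examiner
-- reports flow error 57 against it.
--# inherit Sensor, Logger;
procedure Leak
--# global in  Sensor.Critical_Input;
--#        out Logger.Audit_Log;
--# derives Logger.Audit_Log from Sensor.Critical_Input;
is
begin
   Logger.Audit_Log := Sensor.Critical_Input;
end Leak;
```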

8.6.2.2        Safety policy

When –policy=safety is active, this intuition is reversed. In this mode, information may not flow from a lower-valued “untrusted” own variable or input to a higher-valued “safety critical” own variable or output. Using the labels from the DS0055 package above, this would be interpreted as:

·          “SIL4” input or state may flow to “SIL4” state or output.

·          “SIL4” input or state may flow to “SIL1” state or output.

·          “SIL1” input or state may not flow to “SIL4” state or output.

8.6.3           Limitations

There are two important limitations that must be considered if policy checking is to be used effectively.

8.6.3.1        Absence of data-flow errors

Programs must be free from unconditional or conditional data-flow errors, such as reading the value of an uninitialized variable. Such defects form a covert channel through which information can flow, and so must be prevented.

8.6.3.2        Treatment of constants

Currently, the flow-analyser does not track the value of a constant object in SPARK. If you wish to include the value of a constant in policy checking, then it must be promoted to be a “read-only” own variable, so that an appropriate Integrity property can be defined for it.

For example, the constant

package Keys

is

   Root_Signing_Key : constant Key := 16#DEADBEEF#;

end Keys;

cannot be tracked with the flow analyser. If its value is to be tracked and checked, then it should be promoted to an own variable, thus:

package Keys

--# own Root_Signing_Key (Integrity => Integrities.Top_Secret);

--# initializes Root_Signing_Key;

is

   function Get_Key return Key;

   --# global in Root_Signing_Key;

end Keys;

 

package body Keys

is

   Root_Signing_Key : Key := 16#DEADBEEF#;

 

   function Get_Key return Key

   is

   begin

      return Root_Signing_Key;

   end Get_Key;

end Keys;

Note the use of a parameterless function here to ensure the concrete variable is encapsulated in the package body, thus preventing any client from changing its value.

8.7               Error messages

Data-flow and information-flow errors may be either unconditional or conditional. An error is unconditional if it applies to all paths through a program (and consequently to all its possible executions). In signalling flow errors of this kind, error messages always begin with three exclamation marks (!!!). A conditional flow error is one that applies to some but not all program paths. To determine whether such an error can manifest itself in practice, it is necessary to establish whether any of the paths to which the error applies are executable, which involves their semantic analysis. Error reports on conditional errors begin with three question marks (???).

8.7.1           Data-flow errors (References to undefined variables)

!!!             Flow Error    :20: Expression contains reference(s) to variable XXX which has an undefined value.

The expression may be that in an assignment or return statement, an actual parameter, or a condition occurring in an if or case statement, an iteration scheme or exit statement.  NOTE:  the presence of random and possibly invalid values introduced by data flow errors invalidates proof of exception freedom for the subprogram body which contains them.  All unconditional data flow errors must be eliminated before attempting exception freedom proofs.  See the manual "SPARK Proof Manual" for full details.

!!!             Flow Error    :23: Statement contains reference(s) to variable XXX which has an undefined value.

The statement here is a procedure call or an assignment to an array element, and the variable XXX may appear in an actual parameter, whose value is imported when the procedure is executed. If the variable XXX does not occur in the actual parameter list, it is an imported global variable of the procedure (named in its global definition). NOTE:  the presence of random and possibly invalid values introduced by data flow errors invalidates proof of exception freedom for the subprogram body which contains them.  All unconditional data flow errors must be eliminated before attempting exception freedom proofs.  See the manual "SPARK Proof Manual" for full details.

???         Flow Error    :501: Expression contains reference(s) to variable XXX, which may have an undefined value.

The expression may be that in an assignment or return statement, an actual parameter, or a condition occurring in an if or case statement, an iteration scheme or exit statement.  The Examiner has identified at least one syntactic path to this point where the variable has NOT been given a value.  Conditional data flow errors are extremely serious and must be carefully investigated. NOTE:  the presence of random and possibly invalid values introduced by data flow errors invalidates proof of exception freedom for the subprogram body which contains them.  All reports of data flow errors must be eliminated or shown to be associated with semantically infeasible paths before attempting exception freedom proofs.  See the manual "SPARK Proof Manual" for full details.

???         Flow Error    :504: Statement contains reference(s) to variable XXX, which may have an undefined value.

The statement here is a procedure call, and the variable XXX may appear in an actual parameter, whose value is imported when the procedure is executed. If the variable XXX does not occur in the actual parameter list, it is an imported global variable of the procedure (named in its global definition). The Examiner has identified at least one syntactic path to this point where the variable has NOT been given a value.  Conditional data flow errors are extremely serious and must be carefully investigated. NOTE:  the presence of random and possibly invalid values introduced by data flow errors invalidates proof of exception freedom for the subprogram body which contains them.  All reports of data flow errors must be eliminated or shown to be associated with semantically infeasible paths before attempting exception freedom proofs.  See the manual "SPARK Proof Manual" for full details.

8.7.2           Data-flow anomalies and ineffective statements

!!!             Flow Error    :10: Ineffective statement.

Execution of this statement cannot affect the final value of any exported variable of the subprogram in which it occurs. The cause may be a data-flow anomaly (i.e. the statement could be an assignment to a variable which is always updated again before it is read). However, statements may be ineffective for other reasons - see Section 4.1 of Appendix A.

!!!             Flow Error    :10: Assignment to XXX is ineffective.

This message always relates to a procedure call or an assignment to a record. The variable XXX may be an actual parameter corresponding to a formal one that is exported; otherwise XXX is an exported global variable of the procedure. The message indicates that the updating of XXX, as a result of the procedure call, has no effect on any final values of exported variables of the calling subprogram. Where the ineffective assignment is expected (e.g. calling a supplied procedure that returns more parameters than are needed for the immediate purpose), it can be a useful convention to choose a distinctive name, such as "Unused" for the actual parameter concerned.  The message "Assignment to Unused is ineffective" is then self-documenting.

!!!             Flow Error    :53: The package initialization of XXX is ineffective.

Here XXX is an own variable of a package, initialized in the package initialization. The message states that XXX is updated elsewhere, before being read.

!!!             Flow Error    :54: The initialization at declaration of XXX is ineffective.

Issued if the value assigned to a variable at declaration cannot affect the final value of any exported variable of the subprogram in which it occurs because, for example, it is overwritten before it is used.

8.7.3           Invariant conditions and stable exit conditions

!!!             Flow Error    :22: Value of expression is invariant.

The expression is either a case expression or a condition (Boolean-valued expression) associated with an if-statement, not contained in a loop statement. The message indicates that the expression takes the same value whenever it is evaluated, in all program executions.  Note that if the expression depends on values obtained by a call to another subprogram then a possible source for its invariance might be an incorrect annotation on the called subprogram.

!!!             Flow Error    :40: Exit condition is stable, of index 0.

!!!             Flow Error    :40: Exit condition is stable, of index 1.

!!!             Flow Error    :40: Exit condition is stable, of index greater than 1.

In these cases the (loop) exit condition occurs in an iteration scheme, an exit statement, or an if-statement whose (unique) sequence of statements ends with an unconditional exit statement - see the SPARK Definition. The concept of loop stability is explained in Section 4.4 of Appendix A. A loop exit condition which is stable of index 0 takes the same value at every iteration around the loop, and with a stability index of 1, it always takes the same value after the first iteration. Stability with indices greater than 0 does not necessarily indicate a program error, but the conditions for loop termination require careful consideration.
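For example (with hypothetical names), an exit condition that reads only variables never updated inside the loop is stable of index 0:

```ada
loop
   Process_Next_Item;
   -- Done is not modified anywhere in the loop body, so this condition
   -- takes the same value on every iteration and the Examiner reports
   -- flow error 40, stable of index 0.
   exit when Done;
end loop;
```

Here the loop either exits after its first iteration or never terminates, which is why such messages deserve careful consideration.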

!!!             Flow Error    :41: Expression is stable, of index 0.

!!!             Flow Error    :41: Expression is stable, of index 1.

!!!             Flow Error    :41: Expression is stable, of index greater than 1.

The expression, occurring within a loop, is either a case expression or a condition (Boolean-valued expression) associated with an if-statement, whose value determines the path taken through the body of the loop, but does not (directly) cause loop termination. Information flow analysis shows that the expression does not vary as the loop is executed, so the same branch of the case or if statement will be taken on every loop iteration. An index of 0 means that the expression is immediately stable, 1 means it becomes stable after the first pass through the loop, and so on. The stability index is given with reference to the loop most closely containing the expression.  Stable conditionals are not necessarily an error but do require careful evaluation; they can often be removed by lifting them outside the loop.
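A sketch of a stable conditional, again with hypothetical names:

```ada
for I in Index_Type loop
   -- Debug_Mode is not changed within the loop, so this condition is
   -- stable of index 0 (flow error 41): the same branch is taken on
   -- every iteration.
   if Debug_Mode then
      Log_Item (I);
   end if;
   Process_Item (I);
end loop;
```

The message can typically be removed by testing Debug_Mode once, outside the loop, and selecting between two loop bodies.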

8.7.4           Discrepancies between specified dependency relations and executable code

The dependency relation provided with a subprogram specification (explicitly in the case of a procedure, implicitly for a function), effectively names the subprogram imports and exports, and states which (initial values of) imports may be used in deriving (the final value of) each export. The Examiner also computes the import-export dependency relation, from the executable code, and compares the specified and actual relations. Any discrepancies are described by messages of the following kinds.

The first two messages indicate specified dependencies which are not found in the code implementation. These are unconditional errors.

!!!             Flow Error    :50: YYY is not derived from the imported value(s) of XXX.

The item before "is not derived ..." is an export or function return value and the item(s) after are imports of the subprogram.  The message indicates that a dependency, stated in the dependency relation (derives annotation) or implied by the function signature is not present in the code. The absence of a stated dependency is always an error in either code or annotation.

!!!             Flow Error    :50: The imported value of XXX is not used in the derivation of YYY.

The variable XXX, which appears in the dependency relation of a procedure subprogram, as an import from which the export YYY is derived, is not used in the code for that purpose. YYY may be a function return value. This version of the message has been retained for backward compatibility.
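As a hypothetical sketch, a derives annotation stating a dependency that the code never realises:

```ada
procedure Add (Increment : in Integer)
--# global in out Total;
--# derives Total from Total, Increment;
is
begin
   -- Increment is never read, so the Examiner reports flow error 50:
   -- "Total is not derived from the imported value(s) of Increment".
   Total := Total + 1;
end Add;
```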

The following kinds of messages are used to report dependencies found in the code of a subprogram, which do not appear in its specified dependency relation. These are conditional errors only: the dependencies found in the code may be attributable to paths that are not executable.

???         Flow Error    :601: YYY may be derived from the imported value(s) of XXX.

Here the item on the left of "may be derived from ..." is an exported variable and the item(s) on the right are imports of a procedure subprogram. The message reports a possible dependency, found in the code, which does not appear in the specified dependency relation (derives annotation). The discrepancy could be caused by an error in the subprogram code which implements an unintended dependency.  It could also be an error in the subprogram derives annotation which omits a necessary and intended dependency.  Finally, the Examiner may be reporting a false coupling between two items resulting from a non-executable code path or the sharing of disjoint parts of structured or abstract data (e.g. one variable writing to one element of an array and another variable reading back a different element). Unexpected dependencies should be investigated carefully and only accepted without modification of either code or annotation if it is certain they are of the "false coupling" kind.

???         Flow Error    :601: The imported value of XXX may be used in the derivation of YYY.

Here the first item is an import and the second is an export of a procedure subprogram. The message reports a possible dependency, found in the code, which does not appear in the specified dependency relation. This version of the message has been retained for backward compatibility.
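A hypothetical sketch of such a discrepancy, a dependency present in the code but absent from the annotation:

```ada
procedure Scale (Factor : in Integer)
--# global in out Total;
--# derives Total from Total;
is
begin
   -- Total also depends on Factor here, so the Examiner reports flow
   -- error 601: "Total may be derived from the imported value(s) of Factor".
   Total := Total * Factor;
end Scale;
```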

???         Flow Error    :602: The undefined initial value of XXX may be used in the derivation of YYY.

Here XXX is a non-imported variable, and YYY is an export, of a procedure subprogram.

There are two formats for messages 50 and 601. In the first format presented here, if more than one variable XXX affects YYY in a way that does not conform to the specified dependency relation, they all appear in the same error message. Setting the switch -original_flow_errors produces a separate message for each variable, in the second (original) format.

8.7.5           Violation of restriction on imported-only variables

!!!             Flow Error    :34: The imported, non-exported variable XXX may be redefined.

The updating of imported-only variables is forbidden under all circumstances.

8.7.6           Violation of safety or security policy

See section 8.6.2 for more information on policy checking.

!!!      Flow Error          :57: Information flow from XXX to YYY violates the selected information flow policy.

8.7.7           Supplementary error messages

The following supplementary messages are issued, to assist error diagnosis. The meanings of these messages are evident.

!!!             Flow Error    :30: The variable XXX is imported but neither referenced nor exported.

!!!             Flow Error    :31: The variable XXX is exported but not (internally) defined.

!!!             Flow Error    :32: The variable XXX is neither imported nor defined.

!!!             Flow Error    :33: The variable XXX is neither referenced nor exported.

!!!             Flow Error    :35: Importation of the initial value of variable XXX is ineffective.

The meaning of this message is explained in Section 4.2 of Appendix A.

8.7.8           Inconsistencies between abstract and refined dependency relations

!!!             Flow Error    :1: The previously stated updating of XXX has been omitted.

XXX occurred as an export in the earlier dependency relation but neither XXX nor any refinement constituent of it occurs in the refined dependency relation.

!!!             Flow Error    :2: The updating of XXX has not been previously stated.

A refinement constituent of XXX occurs as an export in the refined dependency relation but XXX does not occur as an export in the earlier dependency relation.

!!!             Flow Error    :3: The previously stated dependency of the exported value of XXX on the imported value of YYY has been omitted.

The dependency of the exported value of XXX on the imported value of YYY occurs in the earlier dependency relation but in the refined dependency relation, no constituents of XXX depend on any constituents of YYY.

!!!             Flow Error    :4: The dependency of the exported value of XXX on the imported value of YYY has not been previously stated.

A refined dependency relation states a dependency of XXX or a constituent of XXX on YYY or a constituent of YYY, but in the earlier relation, no dependency of XXX on YYY is stated.

!!!             Flow Error    :5: The (possibly implicit) dependency of the exported value of XXX on its imported value has not been previously stated.

Either a dependency of a constituent of XXX on at least one constituent of XXX occurs in the refined dependency relation, or not all the constituents of XXX occur as exports in the refined dependency relation. However, the dependency of XXX on itself does not occur in the earlier dependency relation.

9                 Controlling the display of warnings and flow errors

The Examiner has two complementary mechanisms for controlling how certain warning messages are displayed. 

1         The accept annotation can be used, in a source file, to mark particular semantic warning messages (see section 6.3) or flow error messages (see section 8) as “expected” or “justified”.

2         The warning control file can be used to summarise entire groups of semantic warnings (those described in sections 6.3 and 6.4) for an entire Examiner invocation without the need to alter source code.

These two mechanisms are described in the following sections.

9.1         The accept annotation

9.1.1       Overview

SPARK users are encouraged to investigate and, as far as possible, eliminate the cause of all semantic warning and flow error messages; however, it is possible that some messages may remain which do not indicate a deficiency in the program.  The accept annotation is provided to allow users to mark these messages as being expected and to give some textual justification for them.

Common examples of messages that might be justifiable in this way include:

·          ineffective assignments in a procedure call statement where one returned parameter is not needed and not used at the point of call;

·          false coupling of two variables through shared use of a large abstract data structure: e.g. adding an address to a database followed by looking up a name from it would appear to make the name dependent on the address when that is probably not the case; and

·          stable expressions in loops where the cost of repeatedly evaluating the conditional expression is too low to warrant restructuring the code to hoist it out of the loop.

Where messages are marked using the accept annotation, the Examiner suppresses the generation of the associated message.  Instead, a summary of the justifications made is printed at the bottom of the listing file and after each file’s results in the report file.


9.1.2       Form of the accept annotation

9.1.2.1                Syntax

message_kind = flow_message* | warning_message*

* or any unique abbreviation such as “Flow” or even “F”.  Case is ignored and the last character of an abbreviation must not be the underbar (“_”). Flow_Message is used to justify messages described in section 8.4; Warning_Message is used to justify messages described in section 6.3.

justification_string = string_literal | string_literal , justification_string

justification_name = null | dotted_simple_name

justification_clause = message_kind, integer_literal,
[justification_name [, justification_name]], justification_string

justification_statement = --# accept justification_clause { & justification_clause};

end_justify = --# end accept;

9.1.2.2                Examples

-- Ineffective assignment to an actual parameter

--# accept Flow_Message, 10, Unused, "X co-ordinate not needed here";

ReadCartesianPair (X => Unused, Y => Height);

--# end accept;

 

-- Semantic warning about validity of an external variable

--# accept W, 392, InputPort, "all bit patterns are valid for this type";

X := InputPort;

--# end accept;

 

-- Ineffective statement (note that this example does not contain a variable name)

--# accept Flow, 10, "Statement has an effect",

--#                  "outside SPARK boundary";

if FatalError then

  HaltProgram;

end if;

--# end accept;

 


-- Information flow errors (discrepancies between derives annotation and code)

...

--# accept F, 601, Current.Name*, Address*, "Coupling via database" &

--#        F, 601, Current.Name*, JobTitle*, "Coupling via database";

end AddEmployeeDetails;

-- Information flow errors that would appear here are controlled by the above accept statement

* for information flow messages containing two variable names, the export is always placed first, followed by the import.

 

It should be noted that two variable names should only be present on one line when they are referenced by a single error message. Variables referenced by different error messages should be placed on separate lines even if each raises the same error number.

The reserved word null may be used to specify the special null export in the justification of an information flow error. For example:

 

--# accept F, 57, null, TS, "Flow from TS to null OK in this case";

9.1.3       Positioning and scope

A justification_statement may appear anywhere in the declarative part or sequence of statements of a program unit.  It is in force, and will suppress any message(s) that it pattern matches, from the line at which it appears until the end of the unit in which it appears or until an end_justify statement is encountered.  The use of end_justify statement to limit the scope of each justification_statement is recommended.  Note, however, that messages that appear after the end statement of a unit are associated with the last line of the unit.  To control these messages an accept annotation must be in force for a range of lines that includes the last line of the unit.  In this case, an end_justify is not appropriate; see the last example in the previous section.

In function bodies, a final justification_statement must precede the final return_statement in line with SPARK’s normal rules. For example:

   -- Imported parameter Y not used in computation of function return value

   ...

   --# accept F, 30, Y, "Y not used at present";

   return X + 1;

end F;

Accept annotations do not have any effect on locally nested units.  For example:

procedure Outer

--# derives ;

is

   -- an accept annotation here is in effect from:

   -- here

 

 

   -- to here

   procedure Inner -- but not from here

   --# derives ;

   is

   begin

      null;

   end Inner;      -- to here

begin

   -- and then is effective again from here

   Inner;

end Outer; -- to here

9.1.4       Reporting of justified messages

Where messages have been justified using the accept mechanism, this is reported in 3 places:

1         As a simple total count in the Examiner screen output.

2         At the foot of any listing file associated with the file being examined.

3         In the spark report file, after each file’s analysis results.

The listing and report file output is controlled by the “justification_option” command line switch (see section 0).

If the “Ignore” option is selected then any justification_statements are checked but have no effect on the generation of Examiner messages; this option is useful for audit purposes.

If the “Brief” option is selected then the report and listing file will show simple counts of matches thus:

18 message(s) marked as expected.

If the default “Full” option is selected then a tabular summary of justifications is produced thus:

Expected messages marked with the accept annotation

Type Msg    Lines              Reason                    Match

     No.  From    To                                    No.  Line

Flow  23  1102  1104  First write to array                1  1103

Flow 602  1113   end  Consequence of first write to ar    1  1114

Flow  41  4666  4673  Mode-specific code                  1  4667

Flow  41  4726  4730  Mode-specific code                  1  4727

Flow  41  4741  4743  Mode-specific code                  1  4742

Flow  10  4791  4794  Returned parameter not needed in    1  4792

Flow  41  4797  4801  Mode-specific code                  1  4798

The table lists the kind of message, the message number, the range of lines for which it is active (“end” here means “end of the unit in which the accept annotation appears”), the first part of the explanation string, the number of times the annotation successfully pattern matched, and the last line where a match occurred.

Where an accept annotation fails to match any message, a semantic warning is raised thus:

  17        --# accept Warning_Message, 393, Inputs, "all bit ... type";

                       ^3

--- (  3)  Warning           :121: No warning message matches this accept

           annotation.

The failure to match is also noted in the summary or tabular output in the listing and report file.

A warning is also raised for any end_justify statement that does not have a matching justification_statement.

9.2         The warning control file

The warning control file, which is a text file whose default extension is “.wrn”, provides a flexible means of selecting how semantic warnings and notes will be displayed. By default they are indicated in exactly the same way as errors with a message and location pointer appearing in both report and listing file. Entries in the warning control file allow individual categories of warnings, including pragmas selected by name, to be reported instead in a summary form where just the number of each found is appended to the report and listing file. The mechanism allows, for example, innocuous pragmas such as Page to be ignored while continuing to flag more serious ones.

Comments can be included in the warning control file using the normal Ada syntax.

9.2.1       File format

Warning-control-file = { Warning-entry }

Warning-entry = Keyword | Pragma-selection

Pragma-selection = “pragma all” | “pragma” pragma-identifier

The keywords to be used to suppress the individual warnings are given with the corresponding warning messages in sections 6.3 and 6.4.

Shorter unique representations of these options are recognised. Each language feature placed in the warning control file selects that feature for summary rather than full reporting. “Pragma all” selects all pragmas (other than those such as pragma Import which are recognised by the Examiner) for summary reporting; alternatively a number of pragmas can be selected individually by name.

9.2.2       Example of a warning control file

A warning control file with the following contents, selected using the warning_file option, would cause warning messages connected with direct updating of package own variables, and with the pragmas Page and List, to be summarised; all other warnings would be reported in full.

direct

pragma list

pragma page

10                 Verification condition generation

10.1          General description

VC generation is the generation of formulae whose proof demonstrates some property of the associated code. Certain additional errors can be detected during VC generation and these are described below.

These messages may also appear during DPC generation, since the same underlying technology is used to generate DPCs.

10.2          Error messages

The following warning may be issued on-screen at the end of an Examiner analysis:

                Warning - VC Generation requested but no bodies presented. No VCs generated.

This warning is issued when the VC generator is active (i.e. any of the –rtc, –exp, or –vcg switches have been given), but no bodies that actually generate any VCs have been presented to the Examiner. This prevents the common error of presenting only package specifications to the Examiner, resulting in no VCs, or (perhaps worse) leaving previously generated VCs in place.

The following warnings and error messages may be issued during VC generation:

---             Warning                          :406: VC Generator unable to create output files. Permission is required to create directories and files in the output directory.

This message is echoed to the screen if the Examiner is unable to create output files for the VCs being generated (for instance, if the user does not have write permission for the current directory).

---             Warning                          :414: Long output file name has been truncated.

Echoed to the screen if an output file name is longer than the limit imposed by the operating system and has been truncated. Section 4.7 of the Examiner User Manual describes how the output file names are constructed. If this message is seen there is a possibility that the output from two or more subprograms will be written to the same file name, if they have a sufficiently large number of characters in common.

---             Warning                          :495: The VC file NAME has a pathname longer than 255 characters which can produce unexpected problems on Windows with respect to the SPARK tools (undischarged VCs) and other tools.

There is little that can be done to work around this, as it is a fundamental limitation of Windows. You could try one of the following: perform the analysis higher up the directory tree (e.g. in C:\a instead of C:\project_name\spark\analysis); remap a directory to a new drive letter so as to shorten the path (see the Windows subst command); rename or restructure your program to flatten its directory structure; or perform the analysis on a UNIX-based system such as Mac OS X or GNU/Linux, which do not suffer from this problem.

***        Semantic Error              :962: Error(s) detected by VC Generator. See the .vcg file for more information.

This message is echoed to the screen if an unrecoverable error occurs which makes the generation of VCs for the current subprogram impossible. Another message identifying the problem more precisely will be placed in the .vcg file.

In the final case, the following messages may also appear in the VCG file:

!!!             Program has a cyclic path without an assertion.

SPARK generates VCs for paths between cutpoints in the code; these must be chosen by the developer in such a way that every loop traverses at least one cutpoint. If the Examiner detects a loop which is not broken by a cutpoint, it cannot generate verification conditions for the subprogram in which the loop is located, and instead, issues this warning. This can only be corrected by formulating a suitable loop-invariant assertion for the loop and including it as an assertion in the SPARK text at the appropriate point.
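By way of a hedged illustration (the procedure and its invariant are invented for this sketch, not taken from this manual), a loop is broken by placing a user assertion inside it, which then acts as a cutpoint for the VC generator:

```ada
procedure Sum_To (N : in Natural; Total : out Natural)
--# derives Total from N;
is
begin
   Total := 0;
   for I in Natural range 1 .. N loop
      --# assert Total = (I - 1) * I / 2;  -- loop-invariant assertion; cuts the loop
      Total := Total + I;
   end loop;
end Sum_To;
```

Proving the resulting VCs would additionally require suitable bounds on N to rule out overflow; the point here is only the placement of the cutting assertion.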

!!!             Unexpected node kind in main tree.

This message indicates corruption of the syntax tree being processed by the VC Generator. It should not be seen in normal operation. Please contact Altran Praxis if it is produced.

!!!             ‘forall’ structure in VC is corrupt.

This message indicates corruption of the syntax tree being processed by the VC Generator. It should not be seen in normal operation. Please contact Altran Praxis if it is produced.

11                 Analysing automatically generated code

11.1          The KCG language profile

The Examiner has a special language profile for the Esterel KCG code generator, invoked using the command line switch -language=kcg.  The KCG profile is the same as the 2005 language profile, but with extra features specific to automatically generated code enabled.

The KCG language profile should only be used with code produced by the KCG generator, as it permits the examination of automatically generated code which is suboptimal in terms of flow analysis and not what would normally be expected of hand-written code.  The deficiency in flow analysis is mitigated by extra proof obligations, so when the KCG language profile is selected the -vcg switch should also be specified and the resulting VCs proved.

Note: the generation of VCs for proof of definedness is not yet fully implemented and so the SPARK Pro tools cannot guarantee absence of conditional flow errors at present when -language=kcg is selected. This will be addressed in future releases of the SPARK Pro tools.

When the KCG profile is selected, the accept annotation is extended so that the keyword all may be used in place of specific variable names.  Additionally, an accept annotation containing the keyword all placed in the declarative part of a compilation unit (e.g. a package body) applies to all units nested within that compilation unit.  It does not apply to subunits or child packages of the compilation unit, which require their own extended accept annotations.

No warning is issued if an extended accept annotation containing the keyword all is not matched by a corresponding error or warning.

As an example:

package body P is

   --# accept F, 501, all, "KCG";  -- Allowed when -language=kcg

   --# accept F, 602, all, "KCG";  -- otherwise semantic error 175

   …

end P;

The above accept annotations apply to all statements of subprogram bodies within P.

If the Examiner -vcg switch is not selected for an analysis with -language=kcg, then warning 425 is raised for each subprogram analysed.

11.2          KCG language profile and parent access to its public child

The scheme by which the SCADE KCG generates packages requires a parent package to be able to access the visible part of its own public child.  Normally in SPARK a parent can only access the visible part of its private children.  The restriction in SPARK is partly philosophical (a public child is viewed as a means of extending an existing package), but it also prevents mutually recursive subprogram calls between parent and child.

The KCG code generator does not generate recursive subprogram calls, and so it has been deemed acceptable to allow parents to access their own public children when the KCG language profile is selected.  In any case, allowing parent-to-public-child access does not invalidate any of the flow analyses or verification conditions generated by the Examiner, even if recursion is present.  The restriction on the use of recursion exists to satisfy the SPARK goal of having subprograms that are bounded in memory use and execution time.

In order for a parent to access its own public child it must be listed as a component of the parent in the index file as is the case for a private child (see section 4.2.6).
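For illustration only (the unit and file names here are invented, and section 4.2.6 gives the definitive index file syntax), listing a public child as a component of its parent in the index file might look like this:

```
P components P.Child
P specification is in p.ads
P body is in p.adb
P.Child specification is in p-child.ads
```

With this entry in place, the visible part of P.Child becomes accessible from P when -language=kcg is selected.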

If a package references its own public child in a with clause when -language=kcg is selected, then warning 426 is raised.

The own variable of a public child cannot be a refinement constituent.  If this is attempted then semantic error 146 will be raised.
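As a hypothetical sketch (all names invented, and subprogram bodies omitted), a refinement such as the following would be rejected with semantic error 146, because P.Child is a public child:

```ada
package P
--# own State;
is
   procedure Op;
   --# global in out State;
end P;

package P.Child   -- public child of P
--# own S;
is
end P.Child;

package body P
--# own State is P.Child.S;   -- illegal: semantic error 146
is
   -- body of Op omitted
end P;
```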

12                 Static limits and associated error messages

The Examiner contains a number of data structures of fixed size.  If any of these fixed limits is exceeded, the Examiner will stop, displaying the message "Internal static tool limit reached" on the terminal.  This will be preceded by the cause, for example "Syntax tree overflow".  The screen will also echo the consumption of the principal tables in these circumstances.  In this case please contact Altran Praxis for advice.

A similar form of message will be seen if the tool reaches an operating system limit such as the number of files that can be open at any time.  In this case the message will be prefixed by "Operating system limit reached".  Such limits must be attended to by the system manager of the system on which the Examiner is being operated.  Altran Praxis may be able to assist in identifying the parameter that requires alteration.

Fatal errors prefixed by "Unexpected internal error" indicate an internal anomaly in the Examiner; in the unlikely event that errors of this form occur, please contact Altran Praxis.

 

A                      Appendix: Information-Flow and Data-Flow Analysis of while-Programs

Appendix A is provided as a separate document. Its filename is Examiner_UM_Appendix_A_While_Programs.pdf.

 

Document Control and References

Altran Praxis Limited, 20 Manvers Street, Bath BA1 1PX, UK.

Copyright Altran Praxis Limited 2011. All rights reserved.

File under

SVN: trunk/userdocs/Examiner_UM.doc (was S.P0468.73.70)

Changes history

Issue 0.1  (12th January 2000) First Draft created from v4.0 manual.

Issue 0.2  (13th June 2000) Second draft, believed complete.

Issue 1.0  (16th June 2000) Definitive, after formal review

Issue 1.1  (14th Feb 2001) Added cfrs: 810, 812, 824, 831, 833, 838, 852.  Documented RealRTC and OptFlow

Issue 1.2  (28th March 2001) Added documentation of Version switch (CFR 879).

Issue 1.3  (13th July 2001) Start of updating for Release 6.0

Issue 1.4  (24th August 2001) Adds errors 724, 730, 750–754, 800–805.

Issue 1.5  (24th August 2001) Adds ability to specify Long_Integer attributes in target data file.

Issue 1.6  (31st August 2001) Correct Examiner, Simplifier, and Checker context diagram.

Issue 2.0  (31st October 2001) After review S.P0468.79.74

Issue 2.1  (7th January 2002) Added documentation of target configuration file (CFR 992)

Issue 2.11 (5th February 2002) Added target configuration file to Figure 1 (CFR 992)

Issue 2.12 (19th March 2002) Added /Help command line option

Issue 2.13 (15th May 2002) Modular subtypes are allowed, so error 802 removed.

Issue 3.0 (3rd July 2002) Updated, after review, for Release 6.1.

Issue 3.1  (30th September 2002) Updated semantic error message list and minor details to match 6.2 Examiner, changed document title to 6.2 Examiner.

Issue 3.2  (19th November 2002) Update title for Examiner 6.3.  No other changes.

Issue 4.0  (15th April 2003): Updated to new template format.

Issue 5.0  (5th June 2003):  Changes to new template, final format.

Issue 5.1 (17th February 2004): Add /brief command line switch.

Issue 5.2 (17th May 2004): Allow record subtypes.

Issue 5.3 (24th August 2004): Update documentation of html switch.

Issue 5.4 (2nd November 2004): Add /rules command line switch and changed company name.

Issue 5.5 (6th December 2004): Change to error messages 50 and 601, addition of /original_flow_errors switch.

Issue 5.6 (9th December 2004): Removed /vcg switch from options.

Issue 5.7 (17th December 2004): Regenerated error messages.

Issue 5.8 (4th January 2005):  Definitive issue following review S.P0468.79.88

Issue 5.9 (5th July 2005):  New warning 9 for hidden exception handlers.

Issue 6.0 (10th August 2005): Include warning control keyword for semantic notes 3 and 4.

Issue 6.1 (14th November 2005): Updated following changes to /warn /nowarn and /nolisting options.

Issue 7.3 (12th January 2006): Updated for Examiner release 7.3.

Issue 7.31 (13th April 2006): Updated for Examiner release 7.31.

Issue 7.4 (20th December 2006): Updated for Examiner release 7.4.

Issue 7.5 (16th May 2007): Updated for Examiner release 7.5.

Issue 7.6 (14th July 2008): Definitive issue for Examiner release 7.6 following review S.P0468.79.93.

Issue 7.6.2 (20th August 2008): Update for Examiner 7.6.2. “real” is now an FDL reserved word.

Issue 7.6.3 (2nd February 2009): Modify copyright notice.

Issue 8.0.0 (6th February 2009): Update wording of warning 408, and create Issue 8.0.0.

Issue 8.1.0 (18th March 2009): Update for toolset release 8.1.0.

Issue 8.1.1 (27th April 2009): Definitive issue for release 8.1.1 following review 7.148.

Issue 8.1.3 (21st July 2009): Update for toolset release 8.1.3.

Issue 8.1.4 (18th September 2009): Update for toolset release 8.1.4.

Issue 9.0.0 (1st March 2010): Update for toolset release 9.0.0.

Issue 9.1 (16th November 2010): Update for toolset release 9.1.0.

Issue 9.1.1 (24th May 2011): Update for toolset release 9.1.1.

Issue 10.0 (3rd June 2011): Update for toolset release 10.0.0.

Issue 10.1 (15th December 2011): Update for toolset release 10.1.

Changes forecast

None.

Document references

1.       SPARK – The SPADE Ada Kernel (including RavenSPARK)

2.       SPARK 83 – The SPADE Ada 83 Kernel

3.       SPARK Proof Manual

4.       ZombieScope User Manual

5.       SPARK Library User Manual



[1] Forward slash is used as a directory separator throughout this manual, but backslash may also be used on Windows.

[2] The same analysis is also applied to tasks and entries but the term subprogram is used throughout this section for brevity.