What is the scientific component of computer science?

Computer science - an independent science?

Summary

In this contribution I ask for a framework for a comprehensive theory of computer science as a formal theory of discrete dynamic systems, modeled on theory formation in the natural sciences. With numerous examples I show that this undertaking is quite promising and that it has already succeeded in isolated areas. In the long term, computer science could thus develop, in addition to its strong technological aspects, into an independent science with its own theoretical, mathematical basis, on a par with the natural sciences.

Introduction

Background

Computer science is often told as a success story, driven by Moore’s Law, according to which information technology has become exponentially cheaper, faster and smaller over the past 60 years [8]. Accordingly, information technology has been applied to more and more new tasks. Systematic errors in the behavior of devices, failed projects, uncontrollable behavior of computer-integrated systems, etc. are accepted as supposedly unavoidable side effects and justified as the price that rapid development demands.

Whether it could have turned out differently or better - or perhaps should have - is hardly discussed and hardly considered possible. Even the question of an alternative development meets with displeasure, especially among younger scientists who have eagerly learned what already exists, who now want to build on this supposedly solid and irrefutable basis, and who are squeezed into support and reward systems that hardly encourage them to deal with fundamental questions of computer science. In this situation it seems interesting to me to ask some fundamental questions and to discuss aspects that could bring more systematic order into the diverging sub-areas of computer science and that, as the basis of a science of computer science, could make future developments easier and faster to understand and use.

Computer science as a science: historical development

In the 1950s, computer technology was used primarily in two areas: for numerical tasks in traditional engineering and for sifting through larger databases, such as those produced by a census. The two programming languages FORTRAN and COBOL were tailored to these areas; the path from a task via an algorithmic idea to a program was generally sufficiently manageable. This changed in the 1960s: with the increasing computing power of computers, tasks that required more complex algorithmic ideas and more extensive programs appeared solvable. However, programs became more and more error-prone and opaque, and finally, in 1968, a famous conference [30] in Garmisch-Partenkirchen diagnosed a “software crisis”, which was to be countered with “software engineering”. This led to new programming languages (PL/I, Pascal, Ada), new programming paradigms (structured programming, object orientation) and later to new software architectures (components, service orientation, microservices). In addition, software development methods such as systematic refinement and the specification of interfaces were propagated.

In retrospect, it is obvious that the Garmisch-Partenkirchen conference raised important questions and initiated targeted answers. But all of the suggestions made there ultimately make the long way from a task via an algorithmic idea to a program a little more convenient only in its last stretch.

The way of thinking about “computer science” as a science, from its beginnings to the most recent times, can be studied as if under a magnifying glass in the 2010 Memorial Lecture for the famous computer scientist E. W. Dijkstra, in which the likewise famous computer scientist Tony Hoare formulated four criteria for a science and specialized them to the subject matter of computer science [21]:

  • Description: Science describes properties and behavior of systems; in computer science, such a description can serve as a specification, in particular as a precise interface between customer and manufacturer. As a description method, Hoare suggests logic-based calculi, for example weakest preconditions.

  • Analysis: The central objects and their conceptual context are explained. Computer science deals with programs. They are analyzed with pre- and postconditions, for example Microsoft’s Contracts.

  • Explanation: Reasons for the behavior of systems are sought here. In computer science, the semantics of programming languages provide such justifications.

  • Prediction: Good science can predict events. In computer science, this means predicting the correct behavior of a program, i.e. verifying it.

A program is understood as a mathematical object, its properties are to be formulated in the language of logic, its meaning is to be defined formally, and its correctness is to be proven formally. These are undoubtedly scientific concepts. In this sense, David Gries [18] also regards programming as a science.
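
These criteria can be made concrete on a tiny program. The following sketch is my own illustration in Python; the integer division example, its contract and its loop invariant are assumptions for illustration and are not taken from [21]. The contract is the description, the pre- and postconditions support the analysis, and the loop invariant together with Dijkstra’s weakest precondition yields the prediction, i.e. the verification.

    # Hypothetical illustration: a specification as a contract between customer and
    # manufacturer, annotated on the program and checked at runtime.

    def integer_division(a: int, b: int) -> tuple[int, int]:
        # requires: a >= 0 and b > 0
        # ensures:  a == q * b + r and 0 <= r < b   (for the returned pair (q, r))
        assert a >= 0 and b > 0                    # precondition
        q, r = 0, a
        while r >= b:                              # loop invariant: a == q * b + r and r >= 0
            q, r = q + 1, r - b
        assert a == q * b + r and 0 <= r < b       # postcondition
        return q, r

    # Weakest precondition of the loop body with respect to the invariant:
    # wp("q, r = q + 1, r - b",  a == q*b + r and r >= 0)
    #   =  a == (q + 1)*b + (r - b)  and  r - b >= 0,
    # which, under the loop condition r >= b, follows from the invariant itself.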

This contribution

In 14 sections we formulate some thoughts on a comprehensive theory of computer science as a formal, scientific theory of dynamic systems. The subject matter of this science should, however, be understood much more broadly than described above by Dijkstra, Hoare, Gries and Knuth. Nevertheless, this theory should be formulated formally and not merely supplement the existing formal concepts with sociological, and thus informal, concepts. Examples of formal concepts of information processing that are not intended for implementation can be found in business informatics and in embedded systems.

In this article, concepts are explained and questions raised, some of which have received little attention in the literature so far. Some of the concepts presented are not really new; they have been discussed many times in isolation. Perhaps what is new is their combination, together with the claim that, suitably combined and supplemented by further concepts, they can contribute to a comprehensive theory of computer science.

The ideas presented here relate to one another; how exactly they are related remains open. Perhaps some of these ideas will later prove to be of little use - and a number of other fundamental ideas are probably still missing for a complete theory of computer science.

In this contribution I merely want to show that it is worth looking for a more comprehensive theory of computer science.

1. Successful modeling in science

Good science builds theories by developing models. A model is a system of concepts and of relationships between these concepts, intended to better understand reality. “Reality” is either found in nature or - as in the case of computer science - partly constructed by people. A particularly impressive example is physics: for centuries, physicists have been looking for a unifying model for all branches of physics and have thereby developed an impressive edifice of scientific theory. In particular, an astonishing harmony arose between physics and mathematics [26], with very abstract but extremely useful scientific concepts and models. A typical example is the concept of energy. With it, very different phenomena can be related to one another and quantified, for example when a stationary vehicle accelerates and is driven against a wall: in this process, a certain amount of energy is present first in the gasoline, then in the acceleration, and finally in the deformation of the sheet metal. The usefulness of the concept of energy lies in its quantifiable invariance across dynamic processes. Such invariants, often referred to as “laws of nature”, are the framework of science. In recent years, “systems biology”, for example, has also been looking for such concepts and invariants for a better understanding of metabolic processes of all kinds.

Computer science should learn from physics and the other sciences to form a theory for its field as a whole. As early as 1963, John McCarthy [27] suggested the development of a mathematics-based “Science of Computation” in which the important properties and objects of this science are derived from a few basic assumptions.

2. Modeling in computer science

Since models make up the core of a scientific theory, the question of models in computer science arises. As in the natural sciences, computer science is also primarily concerned with describing dynamic properties, but with one general and fundamental difference: physics generally describes behavior continuously, with functions over a real-valued time axis. With these one can formulate wonderful differential equations, integrals and much more. Computer science describes behavior in discrete steps; this means that completely different properties must and can be described, and analyzed with completely different mathematical techniques.

Models are used in computer science for various purposes:

  • to describe the facts of a domain, for example the structure of a company’s accounting. The extent to which this description is “correct” inevitably remains informal,

  • to describe algorithmic behavior, for example applying for insurance benefits,

  • to describe software, either in terms of its properties or its operational behavior.

A model in computer science is always a symbolic description. A model can be executable at the level of this symbolic description and can then be simulated on a computer. However, it can also conceptually describe the behavior of a system in a specific domain without being implemented. With a really useful theory of computer science, it will be worthwhile to establish correctness at the model level and then to systematically derive correct software from the model.

The seamless integration of computing technology with its organizational or technical environment is necessary for many computer-integrated systems, not least for the Internet of Things: this can only be managed with formal models that cover both the computing technology and its environment. This includes, in particular, the modeling of technical or organizational components that are not meant to be implemented at all.

Now there are a number of modeling techniques that support this aspect, above all the Unified Modeling Language (UML) [6] and - primarily for business informatics - Business Process Model and Notation (BPMN) [1]. These techniques offer a variety of graphical means of expression for specific, subtle, domain-specific issues, and both (like a number of others, for example Statecharts) are definitely used for modeling. But all of these techniques offer few possibilities to demonstrate desired properties of the modeled system within the model itself. So even if one models with these techniques and generates software from the models, correctness is argued formally not at the model level but at the software level.

However, there are also modeling techniques with analysis and verification concepts. These include, for example, ALLOY, B, Focus, Live Sequence Charts, Petri nets, TLA and Z, among many others. Some of them, however, are more concept studies for very special properties and aspects.

A modeling technique will only prevail in the long run if it offers strong analysis techniques.

3. Trustworthy models

In a general, systematic framework of modeling principles in computer science, it should be possible for a modeler to use a suitable - formal - modeling technique to describe a system in a trustworthy, comprehensible and unambiguous manner. For a given example, we usually have a good understanding of what constitutes a proper description: it includes all aspects that are important to the modeler and avoids aspects that the modeler does not want to talk about. More concretely, a trustworthy model M of a discrete system S describes:

  • every elementary object of S as an elementary object of M,

  • every elementary operation of S as an elementary operation of M,

  • every composed object of S as a correspondingly composed object of M,

  • every composed operation of S as a correspondingly composed operation of M,

  • every - local - state of S as a state of M,

  • every - local - step of S as a step of M.

In summary: Elementary and compound objects and operations as well as states and steps of S and M should correspond bijectively. Intuitive and formally presented behavior should therefore be linked as closely as possible.

The question now arises of a modeling technique that meets these requirements, at least for a sufficiently large class of systems. It is obvious that such objects, operations, states and steps are not always computable functions in the classical sense. This problem has been raised more or less explicitly several times. Already in his fundamental work The Art of Computer Programming [22], Donald Knuth introduces, with computational methods (nowadays often called transition systems), a more general notion of algorithm: a computational method consists of a set S of states and a next-state function F on S, of which he says “F might involve operations that mortal man can’t always perform” [22, p. 8]. Knuth calls a computational method effective if F is a computable function. In his EATCS award lecture [29], Robin Milner states quite self-critically: “...we should have achieved a mathematical model of computation, perhaps highly abstract in contrast with the concrete nature of paper and register machines, but such that programming languages are merely executable fragments of the theory....” It remains open which non-executable fragments Milner has in mind.
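
A minimal sketch of Knuth’s notion, assuming a rendering in Python (the run helper and the Euclid example are my illustration, not taken from [22]): the model only requires that F be a function on the state set; that it happens to be computable here is an additional property.

    from typing import Callable, Iterator, TypeVar

    State = TypeVar("State")

    def run(initial: State, F: Callable[[State], State],
            is_final: Callable[[State], bool]) -> Iterator[State]:
        """A computational method: generate s0, F(s0), F(F(s0)), ... until a final state."""
        s = initial
        yield s
        while not is_final(s):
            s = F(s)
            yield s

    # Example: Euclid's algorithm as a computational method over pairs of integers.
    euclid = run((1071, 462), lambda s: (s[1], s[0] % s[1]), lambda s: s[1] == 0)
    print(list(euclid))   # [(1071, 462), (462, 147), (147, 21), (21, 0)]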

An attempt is often made to generalize the classical concept of computation - with variables, effective objects and operations over character strings, and programs built from value assignment with sequence, alternative and conditional loop - to freely chosen mathematical objects and operations, for example in [37]. Shepherdson [33] developed a similar idea on the basis of Turing machines. In a slightly different style and from a purely logical perspective, Abstract State Machines [19] consider programs over a signature (a set of symbols of different sorts); the user of the formalism can then freely choose a signature and a desired structure over this signature and talk about the structure with terms.

So a good model uses symbols that are directly interpreted as objects or operations in the modeled world. This is what distinguishes models from programs: A programming language fixes the interpretation of the symbols in a program.

4. Invariants in Computer Science

Invariance is a cornerstone of scientific theory building (Sect. 1); this raises the question of invariance concepts in computer science. The best-known such concept is undoubtedly Hoare’s invariant calculus [20] with its loop invariants for classical programs; [16] describes loop invariance as one of the fundamental ideas of software construction. This may be the case as long as the correctness of a system design is established only at the very end, when the program is written in a classical programming language. A theory of computer science, however, should consider the correctness of models. In fact, many modeling techniques work with special invariants: [36], for example, suggests invariants for models of distributed processes, distributed algorithms and communication protocols. The invariant calculus for Petri nets is particularly expressive (because transitions there are reversible) [31].
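
As an illustration of an invariant at the level of a model rather than a program, the following sketch checks a classical Petri-net place invariant for a small mutual-exclusion net; the net and the Python rendering are my own example in the spirit of [31], not taken from it.

    from collections import deque

    # transitions: (tokens consumed, tokens produced)
    TRANSITIONS = {
        "enter_l": ({"quiet_l": 1, "key": 1}, {"crit_l": 1}),
        "leave_l": ({"crit_l": 1}, {"quiet_l": 1, "key": 1}),
        "enter_r": ({"quiet_r": 1, "key": 1}, {"crit_r": 1}),
        "leave_r": ({"crit_r": 1}, {"quiet_r": 1, "key": 1}),
    }
    PLACES = ["quiet_l", "crit_l", "quiet_r", "crit_r", "key"]
    initial = (1, 0, 1, 0, 1)                        # marking as a tuple over PLACES

    def fire(marking, consume, produce):
        m = dict(zip(PLACES, marking))
        if any(m[p] < n for p, n in consume.items()):
            return None                              # transition not enabled
        for p, n in consume.items():
            m[p] -= n
        for p, n in produce.items():
            m[p] += n
        return tuple(m[p] for p in PLACES)

    # Breadth-first search over all reachable markings, checking the place invariant.
    seen, queue = {initial}, deque([initial])
    while queue:
        marking = queue.popleft()
        m = dict(zip(PLACES, marking))
        assert m["crit_l"] + m["crit_r"] + m["key"] == 1   # the place invariant
        for consume, produce in TRANSITIONS.values():
            succ = fire(marking, consume, produce)
            if succ is not None and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    print(f"{len(seen)} reachable markings, invariant holds in all of them")   # prints 3

The invariant crit_l + crit_r + key = 1 immediately yields mutual exclusion, without inspecting individual runs.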

Each such invariant describes a relationship between system variables that remains unchanged in all reachable system states. These notions of invariance remain very close to the given system model. As the example of the concept of energy (Sect. 1) shows, physics knows much deeper, less obvious and yet (or precisely therefore) extremely useful invariants. Computer science should also strive for comparably deep notions of invariance. Such invariants could perhaps determine exactly what remains invariant when, for example, with the aid of a computer:

  • a travel agency books a trip,

  • a bank customer withdraws cash from an ATM,

  • a life insurance company converts an existing contract to new hardware,

  • a car insurer settles a claim,

  • a data center calculates tomorrow's weather,

  • an operating system clears the memory,

  • a compiler translates a program.

At the moment this is all speculation; however, it is worthwhile to search systematically for modeling techniques that, compared with previous concepts, use far more abstract and less obvious, but nevertheless intuitively understandable concepts and invariants.

5. A fundamental concept (“information”)

As the example of the energy concept shows, invariance can be based on a very abstract yet intuitively understandable concept. What would such a concept look like in computer science? As an example I will try the concept of “information”, with the central invariant of the preservation of information in steps: a given state of a system contains a certain amount of “information”. As long as the system does not communicate with its environment, the system can transform this information in completely different ways, recombine it, compute something, and a step can also make parts of the information inaccessible. But the information of the system as a whole is preserved. A computation, a sequence of steps, then models a strictly organized flow of information.

A constructive definition of such a concept of information is not in sight. But it would have a number of extremely valuable consequences; for example, the invariants described in Sect. 4 could become formally comprehensible. One could precisely formulate what exactly changes and what remains the same when information or documents are copied, deleted or combined. Data protection, protection of privacy and related terms could acquire a much more precise meaning.

Another interesting aspect of such a concept of information are information-preserving operations: such an operation f does not lose any information; an argument a of f can therefore be computed back from the result f(a). Examples are the negation of propositional logic and the positive square root of positive real numbers. Such reversible functions have often been considered, primarily in the context of propositional logic and circuits [39]. The idea of reversible computing could decisively advance IT security: the steps of an attacker could be traced back retrospectively.
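
A minimal sketch of such an information-preserving operation, assuming the CNOT-style mapping f(x, y) = (x, x xor y) on pairs of bit vectors; this is a standard example of reversible computing, and the Python rendering is mine, not taken from [39].

    def f(x: int, y: int) -> tuple[int, int]:
        """Reversible: no information about (x, y) is lost in the result."""
        return x, x ^ y

    def f_inverse(a: int, b: int) -> tuple[int, int]:
        """The argument can be computed back from the result: f is its own inverse."""
        return a, a ^ b

    pair = (0b1010, 0b0110)
    assert f_inverse(*f(*pair)) == pair      # a step can be traced back, as in Sect. 5
    # By contrast, g(x, y) = x | y loses information: g(1, 0) == g(1, 1),
    # so the argument cannot be reconstructed from the result.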

Perhaps it is possible to characterize special information concepts through the operations that are possible or permitted with them in a given context.

6. Interactive components

Classical theoretical computer science abstracts information-technological processes into computable functions over character strings. With its deep results on complexity theory and the connections between logic and automata theory, this approach has established itself as the theoretical basis of computer science. The best-known operational model for it are Turing machines. A Turing machine with a memory and a processor can be seen as an abstraction of a computer of the 1960s. A multiprocessor architecture that uses several processors to increase computing speed can plausibly be abstracted into an architecture with a single processor. A system made up of several active components communicating with one another can be simulated on a single-processor system, but it can hardly be understood as a single-processor system without distorting the meaning of the system.

The first such systems were proposed in the 1980s, in particular the actor formalism of Agha and Hewitt [2], the LINDA language [11] and the Chemical Abstract Machine (CHAM) [5]. In this metaphor, active elements are understood as a kind of molecules that swim in a - metaphorical - chemical solution and react with one another as soon as they meet and suitable reaction conditions exist. Numerous algorithms have been formulated in this framework. Another example is the FOCUS formalism of Broy [10], with components that form stream-processing functions. Finally, Petri nets [31] also belong to this series of modeling techniques: every transition can be understood as an elementary active unit.
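
A minimal sketch of the chemical metaphor, assuming the maximum-finding “reaction” that is often used to illustrate Gamma/CHAM-style computation; the Python rendering is my own, not taken from [5].

    import random

    def react(solution: list[int]) -> int:
        """Molecules react pairwise as long as a reaction is possible: two molecules
        a, b are replaced by max(a, b). The order of reactions is irrelevant; the
        final single molecule is the maximum of the multiset."""
        solution = list(solution)
        while len(solution) > 1:
            a = solution.pop(random.randrange(len(solution)))   # two molecules that
            b = solution.pop(random.randrange(len(solution)))   # happen to meet
            solution.append(max(a, b))                          # the reaction product
        return solution[0]

    print(react([7, 3, 11, 2, 5]))   # 11, regardless of the random reaction order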

The question arises of a theory that forms an appropriate abstract basis for such and many other modeling techniques, in analogy to the computable functions for sequential input/output algorithms.

In a number of papers Wegner proposes generalizations of Turing machines for this purpose, for example in [40] and in [41]. In [13], however, the correctness of Wegner's argument is disputed.

I am not concerned with the question of “what a computer can do in principle”, but with the much broader question of how systems can be appropriately modeled and analyzed that progress in discrete steps and interact with their environment.

7. Independent steps

The concept of discrete steps is based on states: a step begins in a state and ends in a state. In the classical framework of system descriptions, states are global: each step updates the current state. A single behavior, a process, a computation is then a sequence s_0-t_1-s_1-t_2-s_2-... of states s_i and steps s_{i-1}-t_i-s_i (i = 1, 2, ...). Many system models construct the global states of a system composed of components C_1, ..., C_n as the Cartesian product S_1 × ... × S_n of the state spaces S_k of the components C_k. For each step of a component there are then several steps of the composed system. Steps of different components that are independent of one another are recorded in this model nondeterministically, in arbitrary order.

Fig. 1 shows a small example: let A and B be two independent components with 3 and 2 states and steps, respectively, arranged in cycles. Each of the two components has exactly one infinitely long process. The composition A|B of A and B is a component with 6 states. In each state, 2 steps begin; this gives rise to infinitely many infinitely long processes. This conception of behavior is intuitively obvious and is used by many analysis methods, in particular by model checking.

But this view also has disadvantages: the two steps beginning in one state look like alternatives; in fact, however, they both occur, independently of one another! Alternatively, the chronological order of the steps suggests itself, measured by clocks outside the system. [24] discusses details of such assumptions about temporal orderings and requires perfectly accurate clocks for them. To avoid such assumptions, one can exploit the fact that independent steps begin and end in disjoint local states. One can then record them as unordered. A process is then not a sequence but a partial order of steps. Order then does not denote progress in time, but only a “before-and-after” relationship. In this view, the system A|B of Fig. 1 has only one process, which combines the two processes of A and B and whose order unites the orders of the two processes. With the additional requirement that B never performs more local steps than A, the system still has exactly one process, as shown in Fig. 2. Petri nets pursue this proposal with the concept of distributed runs [31].
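
The difference between the two views can be sketched on the system A|B of Fig. 1; the following Python rendering is my own illustration of the example, not a reproduction of the figure.

    from itertools import product

    A_STATES, B_STATES = 3, 2                      # the two independent cyclic components
    print(len(list(product(range(A_STATES), range(B_STATES)))))   # 6 global states of A|B

    # Interleaving view: in every global state an A-step and a B-step are both enabled,
    # so there are 2**n different sequences of n steps, although nothing is "decided".
    def interleavings(n: int) -> int:
        return 2 ** n

    print(interleavings(10))                       # 1024 interleaved prefixes of length 10

    # Partial-order view: a single distributed run, consisting of the local run of A
    #     a0 -> a1 -> a2 -> a0 -> ...
    # and the local run of B
    #     b0 -> b1 -> b0 -> ...
    # with no order at all between A-steps and B-steps; composition adds no nondeterminism.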

Another disadvantage of viewing a single process of a system as a sequence of steps is illustrated by Lamport’s example of an hour clock, which passes from one full hour to the next in a single step. An hour-and-minute clock needs sixty steps for this, so strictly speaking it cannot be used as an hour clock. In his TLA formalism [25], Lamport therefore proposes a logic that conceptually equates a single step with “any number of steps”.

Both examples show that the conceptual understanding of a single process of a system as a sequence of steps has unpleasant consequences when systems are composed or refined. It is therefore worth taking independent events with their local causes and effects seriously and distinguishing them from nondeterminism. This is especially true for large systems, which are usually systematically constructed as a composition or refinement of smaller systems. [23] suggests a specific logic for this.

8. Limited expressiveness of value assignment

Not only programming languages, but also many modeling languages describe steps as assignments of values to variables. In many cases this is appropriate or at least acceptable, but it can also lead to unconvincing models. One example is the “Pebble Game”, which Dijkstra describes in a video in a distinguished lecture series of Stanford University [15]: an urn contains a finite number of black and white stones. For as long as possible, two stones a and b are repeatedly removed and one stone c is put back. Here, c is white if a and b are of different colors, otherwise c is black. Fig. 3 shows Dijkstra’s model: a nondeterministic program of conditional value assignments with arithmetic operations on four integer variables. Fig. 4 models the game as a Petri net: PEBBLE is a constant symbol that is initialized with black and white stones. Each transition models a step of the system, consisting of removing two stones and putting back one stone. The labeling of the arrows of each transition shows the colors of the stones involved. In this model, “removing” and “putting back” are modeled directly; the detour via the numbers of black and white stones involved is avoided. With an invariant, Dijkstra shows that after a finite number of steps exactly one stone is left. It is white precisely if the number of white stones is initially odd. A corresponding invariant also exists for the Petri net [31].
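
The behavioral essence of the game and of the invariance argument can be sketched as follows; this Python rendering is my own and reproduces neither Fig. 3 nor Fig. 4.

    import random

    def pebble_game(white: int, black: int) -> str:
        urn = ["white"] * white + ["black"] * black
        parity = white % 2                         # invariant: (number of white stones) mod 2
        while len(urn) > 1:
            random.shuffle(urn)
            a, b = urn.pop(), urn.pop()            # remove two stones a and b
            c = "white" if a != b else "black"     # put back one stone c
            urn.append(c)
            assert urn.count("white") % 2 == parity    # each step preserves the invariant
        return urn[0]

    # The last stone is white exactly if the number of white stones was initially odd.
    assert pebble_game(white=5, black=3) == "white"
    assert pebble_game(white=4, black=3) == "black"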

Variables and value assignment are also not very suitable for modeling distributed systems, communication protocols, etc. An example is the TLA model of an asynchronous interface in [25]. Ultimately, a scheduler is required there that regulates the access of several components to the shared variables. The components can therefore only be implemented in a truly distributed manner under extensive further assumptions.

A theory of computer science will not only use value assignment to describe steps, but a variety of other, possibly more abstract concepts. An example are Petri nets: the meaning of a transition is given by the changes in the marking of the places in its immediate vicinity.

9. The metaphor of the living organism

New systems can be formed by refining and composing given systems. The question arises of further methods for systematically constructing new systems from given ones. The metaphor of 3D printing and the “living organism” metaphor of [14] are among the few suggestions for this. There, several “cells” can independently form “creatures” with completely new properties. In general, the question arises of the limits of such constructions and of the “unconstructible”, in analogy to the limits of computability in the classical theory of computation.

10. Correctness, verification

Scientific theories thrive on models with interesting consequences. A model is only really useful if something interesting can be derived from it. The purely descriptive character of UML, BPMN, ASMs and other modeling techniques without specific analysis methods crucially restricts their use. A really good model of a system is trustworthy (see Sect. 3) and at the same time analyzable, especially with deep invariants (see Sect. 4).

Many properties of systems can be traced back to properties of individual states and processes. Temporal logic has achieved a dominant position in describing such properties [17], with model checking and abstract interpretation as efficient analysis methods. The distinction between safety and liveness properties due to Alpern and Schneider is also very useful.
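
For illustration, two standard temporal-logic formulas (textbook examples of mine, not taken from [17]): a safety property rules out bad finite prefixes, while a liveness property promises eventual progress.

    \Box\,\lnot(\mathit{crit}_1 \land \mathit{crit}_2)              % safety: the two processes are never critical together
    \Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})   % liveness: every request is eventually granted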

A theory of computer science should, however, be able to formulate and prove much deeper-lying properties on the basis of nontrivial invariants (Section 4) and a specific concept of information flow (Section 5).

A user of a large system needs a modified concept of “correctness”: he is not really interested in whether the whole system is correct (many large systems are incorrect for borderline cases anyway), but only in whether the system delivers correct results for his current case. Ideally, the system also provides a plausible, understandable justification for this. Classical verification fails to meet both requirements: a faulty system can still be useful, and a reference to a successful formal verification of a property formulated in temporal logic does not necessarily strengthen confidence in the correctness of an individual use. First suggestions in this direction are certifying algorithms [28] and runtime verification [4]. A theory of computer science must look for flexible notions of correctness and of its proof.
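
A minimal sketch of a certifying algorithm in the sense of [28], using the common extended-Euclid example (the Python rendering and the names are mine): the algorithm returns not only the result but also a witness that a simple, independent checker can verify for this one run.

    def certified_gcd(a: int, b: int) -> tuple[int, int, int]:
        """Extended Euclid: returns (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
        if b == 0:
            return a, 1, 0
        g, x, y = certified_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def check(a: int, b: int, g: int, x: int, y: int) -> bool:
        """The checker needs no number theory: g divides a and b, so it is a common
        divisor; every common divisor divides a*x + b*y == g, so g is the greatest."""
        return a % g == 0 and b % g == 0 and a * x + b * y == g

    g, x, y = certified_gcd(1071, 462)
    assert check(1071, 462, g, x, y)      # the user can trust this particular result
    print(g, x, y)                        # 21, together with a Bezout certificate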

11. Time, causality, observation, etc.

Classical real-time models are appropriate for computer-aided real-time systems, for example the control of an airbag. Many modeling techniques, however, argue with a naive concept of time, as if the current time were available with arbitrary precision and without additional effort. Lamport [24] softens this view a little without abandoning it entirely. Some modeling techniques, for example Estelle, ESTEREL and Statecharts, use the hypothesis of “infinitely fast” digital systems, because such systems actually work orders of magnitude faster than their human users.

In fact, “time” is often invoked where conceptually cause-and-effect relationships are meant. “Time” and “causality” in systems, as well as “observation” in models, are concepts whose role in modeling in computer science is not really understood. A challenging example is a formal description of the Applesort algorithm (cf. [35]): apples roll one behind the other over a sloping board with holes of increasing size. Each apple falls through the first hole whose diameter equals or exceeds the diameter of the apple. With n holes, this algorithm classifies the apples into n size classes.
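
The input/output behavior of Applesort is easy to sketch; what the following Python rendering (my own illustration, not taken from [35]) does not capture is precisely the challenging part: the apples sort themselves through physical interaction in time, without any central algorithmic control.

    def applesort(apples: list[float], holes: list[float]) -> dict[int, list[float]]:
        """holes are ordered as on the board, with increasing diameters."""
        assert holes == sorted(holes)
        classes: dict[int, list[float]] = {i: [] for i in range(len(holes))}
        for apple in apples:                    # apples roll one behind the other
            for i, hole in enumerate(holes):
                if apple <= hole:               # the first hole the apple fits through
                    classes[i].append(apple)
                    break
            # apples larger than every hole roll off the end of the board
        return classes

    print(applesort([3.1, 5.0, 2.2, 4.7], holes=[3.0, 4.0, 5.0]))
    # {0: [2.2], 1: [3.1], 2: [5.0, 4.7]}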

12. Refine and Compose

Large systems are usually refined from abstract requirements or composed of smaller systems. For the systematic refinement of specifications there are a number of general methods, principles and formalisms (e.g. [3]). Fundamental for logical specifications is the principle of refinement implication: the specification of the refinement implies the specification of the given system. Corresponding methods, principles and formalisms for the composition of logical specifications have been suggested for the language Z [34], Lamport’s TLA [25] and Broy’s stream-based Focus [10], with the far-reaching proposal of logically capturing composition as conjunction. At the operational level, [32] proposes a general composition operator that is associative and does not require any description of the interior of components. All of these methods, principles and formalisms ask correct and important questions, but have not really caught on.
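
Schematically, in a notation of my own rather than that of Z, TLA or Focus: if S specifies the given system, R the specification of a refinement, and S_1, S_2 the specifications of components C_1, C_2, then

    R \;\Rightarrow\; S                                     % refinement implication
    \mathit{spec}(C_1 \parallel C_2) \;=\; S_1 \wedge S_2   % composition as conjunction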

13. Computability

The theory of computable functions has long been viewed as the basis of a theory of computer science. All attempts to refute the Church-Turing thesis have failed. However, this thesis has often been inadmissibly generalized. In fact, it only describes a matter of the systematic manipulation of character strings [38]. Computer science, and thus a theory of computer science, goes far beyond the manipulation of character strings; rather, it is about the interpretation of symbols in the real world. With the manipulation of character strings, effects in the real world are described and achieved. These relationships are soberly presented in [12]. In a theory of computer science, the theory of computable functions has an important place - alongside other concepts. The same holds for formal logic.

14. Computer science as an engineering discipline

An engineering discipline, for example electrical engineering or chemical engineering, is based on a science (in these examples physics and chemistry) whose principles and insights are then made usable for human purposes. But software is not such a science. Software is the result of activities in the context of software engineering [7, 9]. So what science is software engineering based on? One could try “algorithmics” and “algorithm engineering”, if this designation did not already stand for a narrow notion of “algorithm”, and if there were a sufficiently comprehensive notion of “algorithm”. This is where “computer science” belongs as a science, as the basis of various engineering disciplines. One of them is then “software engineering”.

Conclusion

With this text I would like to arouse interest in the goal of a comprehensive theory of computer science. Physics serves as the model for such a theory. The basic idea is a set of proposals for formal concepts that go far beyond computing technology and programming, with a view also to the applications and their limits. The text gives a few suggestions for developing such a comprehensive theory; further suggestions can be found in the relevant literature. Such a theory of computer science does not make previous principles of computer science obsolete; however, they can then be better classified and related to one another, together with future engineering concepts of computer science.

Literature

Used literature

  1. https://www.omg.org/spec/BPMN/2.0/. Accessed: March 9, 2020

  2. Agha G (1986) Actors: a model of concurrent computation in distributed systems. MIT Press, Cambridge, MA

  3. Back R, Wright J (1998) Refinement calculus. Springer, New York

  4. Bartocci E, Falcone Y, Francalanza A, Reger G (2018) Introduction to runtime verification. In: Lectures on runtime verification. Introductory and advanced topics. Lecture notes in computer science, Vol 10457. Springer, Cham, pp 1–33

  5. Berry G, Boudol G (1990) The chemical abstract machine. In: Proc. ACM POPL

  6. Booch G, Rumbaugh J, Jacobson I (2005) The unified modeling language user guide, 2nd edn. Addison-Wesley, Boston

  7. Brennecke A, Keil-Slawik R (Eds) (1996) Position papers for Dagstuhl Seminar 9635 on history of software engineering

  8. Brock DC (Ed) (2006) Understanding Moore’s law: four decades of innovation. Chemical Heritage Foundation, Philadelphia

  9. Broy M (1998) A logical basis for modular software and systems engineering. In: SOFSEM 98

  10. Broy M, Stolen K (2001) Specification and development of interactive systems. Springer, New York

  11. Carriero N, Gelernter D, Mattson T, Sherman A (1994) The Linda alternative to message-passing systems. Parallel Comput 20(4):633–655

  12. Cleland CE (1993) Is the Church-Turing thesis true? Minds Mach 3(3):283–312

  13. Cockshott P, Michaelson G (2007) Are there new models of computation? Reply to Wegner and Eberbach. Comput J 50(2):232–247. https://doi.org/10.1093/comjnl/bxl062

  14. Dershowitz N, Falkovich E (2014) Generic parallel algorithms. In: Proceedings of computability in Europe. LNCS, Vol 8493

  15. Dijkstra EW (1990) Reasoning about programs. In: The distinguished lecture series, academic leaders in computer science and electrical engineering, Vol III. University Video Communications, Stanford

  16. Furia CA, Meyer B, Velder S (2014) Loop invariants: analysis, classification, and examples. ACM Comput Surv. https://doi.org/10.1145/2506375

  17. van Glabbeek RJ (2001) The linear time - branching time spectrum I: the semantics of concrete, sequential processes. In: Handbook of process algebra. Elsevier, Amsterdam (chapter 1)

  18. Gries D (1981) The science of programming. Springer, New York

  19. Gurevich Y (2000) Sequential abstract state machines capture sequential algorithms. ACM Trans Comput Log. https://doi.org/10.1145/343369.343384

  20. Hoare CAR (1969) An axiomatic basis for computer programming. Commun ACM 12:576–583

  21. Hoare T (2010) What can we learn from Edsger W. Dijkstra? Edsger W. Dijkstra Memorial Lecture, Austin, Texas, Oct 12, 2010

  22. Knuth DE (1973) The art of computer programming, Vol 1. Addison-Wesley, Boston

  23. Küster-Filipe J (2000) Fundamentals of a module logic for distributed object systems. J Funct Log Program 2000(3):52–62

  24. Lamport L (1978) Time, clocks, and the ordering of events in a distributed system. Commun ACM 21(7). https://doi.org/10.1145/359545.359563

  25. Lamport L (2002) Specifying systems. Addison-Wesley, Boston

  26. Livio M (2009) Is God a mathematician? Simon & Schuster, New York

  27. McCarthy J (1963) Towards a mathematical science of computation. In: Proc. IFIP Congress 62. North-Holland, Amsterdam, p 21

  28. McConnell RM, Mehlhorn K, Näher S, Schweitzer P (2011) Certifying algorithms. Comput Sci Rev. https://doi.org/10.1016/j.cosrev.2010.09.009

  29. Milner R (2005) Software science: from virtual to reality. Bull EATCS 87:12–16

  30. Naur P, Randell B (Eds) (1969) Software engineering: report of a conference sponsored by the NATO Science Committee, Garmisch, Germany, 7–11 Oct 1968. NATO Scientific Affairs Division, Brussels

  31. Reisig W (2013) Understanding Petri nets. Springer, Berlin, Heidelberg

  32. Reisig W (2019) Associative composition of components with double-sided interfaces. Acta Inform. https://doi.org/10.1007/s00236-018-0328-7

  33. Shepherdson JC (1995) Mechanisms for computing over arbitrary structures. In: The universal Turing machine

  34. Spivey JM (1992) The Z notation: a reference manual, 2nd edn. International series in computer science. Prentice Hall, Upper Saddle River, New Jersey

  35. Stein LA (2005) Interaction, computation, and education. In: Goldin D et al (Eds) Interactive computation - the new paradigm. Springer, Berlin, Heidelberg

  36. Tel G (2000) Introduction to distributed algorithms, 2nd edn. Cambridge University Press, Cambridge, UK

  37.