
Transformational Reducedness and Synthesis of Algorithms and Programs of Symbolic Processing

G. E. Tseytlin
International Solomon University, Ukraine, Kiev
tseytlin@vikno.relc.com

Abstract

The apparatus of the algebra of algorithmics is considered, a promising avenue of the Ukrainian algebraic-cybernetic school, in the context of establishing modern tools for the transformation and synthesis of algorithms and programs in object media.

Key words: algebras of algorithms, systems of Glushkov's algorithmic algebras, regular schemes, synthesis of algorithms and programs, object-oriented media.

1. Introduction

Within the framework of investigations on algebraic algorithmics [1], which are being intensively developed both in Ukraine and abroad, noteworthy are the fundamental works by V. M. Glushkov [2, 3] on systems of algorithmic algebras (SAA), which gave rise to the algebra of algorithmics [4, 5], one of the promising avenues of the Ukrainian algebraic-cybernetic school. The modern state of this avenue is reflected in the reviews [6, 7] and is associated with investigations on the theory of clones, as well as with the development of tools of the algebra of algorithmics [8-10] for the automation of programming in object-oriented media (OOM) [11, 12]. An important aspect of algebraic investigations consists in analytical transformations of formulas in these algebras [13-15].

The material of this article is structured as follows. In Section 2, the problem of transforming analytical (formula) representations of schemes of algorithms in the algebra of algorithmics is stated. The concept of transformational reducedness, which is the basis of this article, is illustrated in Section 3 by simple examples using sorting algorithms; here, along with the analytical form of describing algorithms, the textual form (SAA-schemes) [2, 4, 8] and the graph form [14] are also used.
In Section 4, results on the adaptiveness of sequential (FIFO) sorting algorithms, in particular the Shell method (SM) [17-22], are presented.

2. Problem Essence, Problem Definition

In this Section we present some problems typical for the algebraic design and synthesis of algorithms and programs, and outline the concepts and basic models investigated in the algebra of algorithmics, to which this review is devoted.

One of the main algebraic problems is to carry out analytical transformations of formulas, which are superpositions of operations from the signature, by building the legislation of the chosen algebra, i.e. the system of basic equations characterizing the main properties of the operations included in the signature of the algebra. In the application of equivalent transformations, of special interest (along with other important problems) are:
- reduction of formulas to a canonical (unambiguous) form;
- solving the problem of identity of analytical representations in a given algebra;
- optimization (in particular, reduction) of formulas according to certain criteria [4].

The formulas in algebras of algorithms are schemes that serve as models of algorithms or programs. Thus, algorithms can be improved by chosen criteria (memory, speed, etc.), and other nontrivial problems of algorithmics and programming can be solved [5, 6]. In particular, one of the basic stages of the life cycle of programs is known to be their algorithmization. This stage is realized within the framework of the method of descending (top-down) large-block design of algorithms and programs. We note the applicability of special metalaws in the process of designing: compression (bottom-up), development (top-down), reinterpretation (reduction-development), and transformation based on the legislation of a certain algebra of algorithms [4].
The metalaws listed are based on the application of processing strategies for noninterpreted and partially interpreted schemes of algorithms. The known analytical, natural-linguistic, and visualized methods of algorithm specification are:
- description of algorithms by formulas in the corresponding algebras;
- representation of algorithms by correspondingly formatted texts (for example, SAA-schemes in the SAA/1 language [3, 8, 4]);
- block schemes or graph schemes of algorithms [16].

In [4] (see Chapter 10) a technology for the joint design of algorithms is proposed in which these three mutually complementary forms are used. The technology mentioned is realized in the tools of algorithmics [8, 11, 12].

One of the avenues in modern programming technology is based on the formalized (in particular, algebraic) design of algorithms and programs [8-10, 14]. Here, very topical is, on the one hand, the problem of validating algorithm design and, on the other hand, the establishment of tools of algorithmics, which determines the degree of automation of algorithm and program design. Both problems include the proof of the validity of the designed algorithms and their optimization, in particular, improvement according to certain criteria. The solution of these and a number of other important problems of algorithmics is associated with the transformational and interactive design process [11, 12].

3. Transformational Reducedness of Simple Algorithms

The material of this Section has an illustrative and auxiliary character. In terms of transformational reducedness of schematic models applied to simple sorting algorithms, an interconnection between them is established, which will be used in the subsequent Sections of this review. Let us demonstrate the process of transformation on simple sorting algorithms represented by regular (structural) schemes (RS) in the Dijkstra algebra (DA) [4], a reduced version of the Glushkov SAA [2, 4].
The RSs mentioned below are transformationally reducible (transformable) into each other by a set of transformations based on identities, quasi-identities, and relations satisfied by the operations of the chosen algebra of algorithms [4]. Below, the essence of the algorithms considered and of their transformations is given in the commenting brackets /*...*/.

Let us recall that, together with the usual Boolean operations, the signature of DA contains:
- {[u] A}, the cycle of the type while-do;
- ([u] A, B), the alternative, a construction of the type if-then-else;
- A * B, the composition (sequential execution of operators A and B).

A superposition of the operations mentioned leads to the noninterpreted strategy R, represented in DA as:

R ::= {[u] ([u'] A * B, S)}.

As a result of assigning values (interpretations) to the variables of the scheme R, we obtain the RS of the following algorithm:

SOLUTE ::= {[d(P, F)] ([l>r] TRANSP(l, r) * ALLOC(P, S), R(P))},

where m_0: S P a_1 a_2 ... a_n F is the initial state of the marking [4], on which the following predicates and operators are defined: d(P, F) and l>r are the predicates that are true if pointer P reaches marker F and if this relation holds for the elements adjacent to pointer P, respectively, and false if these conditions are not met; TRANSP(l, r) is the transposition of the symbols l, r; ALLOC(P, S) is the allocation of P on the beginning marker S; R(P) is the shift of pointer P by one element to the right. The interpretations considered are the visible part of the tools used for access to the data represented in terms of the marking: S, F are markers fixing the beginning and the end of the array to be sorted, and P is a pointer for data processing, which can move over the array.
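As an informal illustration (a sketch of my own, not the author's code; the function name and list representation are assumptions), the interpreted RS SOLUTE can be rendered in Python, with the list index playing the role of pointer P:

```python
def solute(a):
    """Sketch of RS SOLUTE: pointer P scans to the right; on a disordered
    adjacent pair the pair is transposed (TRANSP) and the pointer returns
    to the beginning marker S (ALLOC); the outer cycle exits when P
    reaches the end marker F."""
    p = 0
    while p < len(a) - 1:                   # cycle exits on d(P, F)
        if a[p] > a[p + 1]:                 # predicate l > r
            a[p], a[p + 1] = a[p + 1], a[p] # TRANSP(l, r)
            p = 0                           # ALLOC(P, S)
        else:
            p += 1                          # R(P)
    return a
```

Every detected disorder sends the pointer back to the start, which is exactly what the transformations below improve upon.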
Let us present some transformations of the algorithm considered (here l <= r denotes NOT(l > r)):

SOLUTE' ::= {[d(P, F)] ([l <= r] R(P), TRANSP(l, r) * ALLOC(P, S))} (1).

/* The essence of RS (1) is that pointer P moves in the cycle over the true branch of the alternative until a regular disorder is detected and processed on the other branch of the alternative, or until exit from the cycle with the array sorted. */

SOLUTE'' ::= {[d(P, F)] ([l <= r] R(P), {[l <= r] TRANSP(l, r) * L(P)})} (2).

/* In RS (2), on the false branch of the alternative an operator L(P) is introduced, denoting the shift of the pointer to the left. As a result, an accelerated allocation of the disordered element on its "place" is realized. */

In such a way, an adaptive version of the algorithm SOLUTE is obtained, which can be transformationally reduced to the Dijkstra algorithm, one of the first adaptive sorting algorithms, also named "direct insertions" [17] or SHUTTLE [4, 18], represented by the RS:

SHUTTLE ::= {[d(P_1, F)] R(P_1, P_2) * {[l <= r] TRANSP(l, r) * L(P_2)} * ALLOC(P_2, P_1)},

where m_0: S P_1 P_2 a_1 a_2 ... a_n F is the initial state of the marking, on which the following interpretations are defined: L(P_2) is the shift of P_2 by one element to the left; ALLOC(P_2, P_1) is the allocation of P_2 on P_1; R(P_1, P_2) is the joint shift of pointers P_1 and P_2 by one element to the right.

/* The essence of the algorithm SHUTTLE is that a disordered element r is allocated on its place in the sorted fragment, with subsequent return of P_2 onto P_1 and search for the next regular disordered element. A program realization of RS SHUTTLE may be based on simulating the shifts of P_1 and P_2 by means of two stacks, a main one and an auxiliary one.
A sorted array fragment is accumulated in the main stack; on finding a disordered element (smaller than the top of the stack), elements from the main stack are rewritten into the auxiliary one until the place for allocating the disordered element is found, after which the content of the auxiliary stack is returned to the main one [4]. */

The algorithm SHUTTLE is transformationally reducible to the following RS:

SHUTTLE' ::= {[d(P_1, F)] ([l <= r] ALLOC(P_2, P_1) * R(P_1, P_2), TRANSP(l, r) * L(P_2))} (3).

/* RS (3) represents the algorithm SHUTTLE by means of a cycle with a nested alternative. */

Thus, the modified RSs presented in this Section are transformationally reducible to each other by means of the metarules of algorithmics. In a general form, the property of cycles with a nested alternative is characterized by the following identity:

{[u] ([u'] A, B)} = {[u] {[u' + u] B} * {[NOT(u') + u] A}} (4).

Among the transformations of identity (4), the following are noteworthy:

{[u] ([u'] A, B)} = {[u] ([NOT(u')] {[u' + u] B}, {[NOT(u') + u] A})} (5).

/* The right-hand part of identity (5) is an alternative nested into the cycle, whose branches are the nested cycles of identity (4). */

{[u] ([u'] A, B)} = {[u] (Ф(NOT(u')) * {[u' + u] B}, Ф(u') * {[NOT(u') + u] A})} (6).

/* Identity (6) in the external cycle of the right-hand part reflects a nondetermined interaction of the branches of identity (4) under the control of the filters Ф(z) [3, 4]: when the predicate z is true, the filter Ф(z) elaborates the identical operator E, which does not change the state of the data being processed; otherwise the filter blocks the calculations on the given branch and passes control to the adjacent process. */

{[u] ([u'] A, B)} = {[u] (Ф(NOT(u')) * {[u' + u] B} // Ф(u') * {[NOT(u') + u] A})} (7).
/* In contrast to the right-hand part of identity (6), in (7) the execution of the processes is realized by means of the asynchronous disjunction [3] under the control of filters. */

The RSs of the simple sorting algorithms mentioned earlier may be obtained by due interpretation of the right-hand part of identity (5) and, hence, by their reinterpretation. At the same time, development of the right-hand parts of identities (6) and (7) leads, respectively, to nondetermined and parallel versions of these algorithms.

4. Adaptiveness and Transformations of Algorithms of Sorting

This Section is devoted to the transformational reducedness of known sortings to adaptive algorithms (algorithms that take into account the degree of order of the array). Results are presented on the adaptiveness of sequential sortings whose RSs belong, in particular, to the Shell method [17, 19].

Let us consider the RS BUBBLE of the known multipass sorting algorithm [4, 17]:

BUBBLE ::= {[SAO(P, F)] {[d(P, F)] ([l>r] TRANSP(l, r), E) * R(P)} * ALLOC(P, S)},

where SAO(P, F) is the predicate that is true if the array is sorted, and false in the opposite case. The other interpretations of this scheme are the same as those considered in Section 3.

/* The essence of the algorithm BUBBLE is its multipass property: on each pass, the maximal of the still disordered elements "emerges" and takes its place in the ordered array fragment formed to the left of the marker F. It is assumed that marker S is smaller than any element of the sorted array, while marker F is maximal. */

Let S, S' be schematic representations of algorithms or processing strategies in a chosen algebra of algorithms. We denote by S => S' (S <= S') the transformational reducedness of the scheme S to S' (of S' to S, respectively) by means of an identity T. The transitive closure of the relation of transformational reducedness of the scheme S to S' will be denoted S =(*)= S'.

Lemma 1 [20, 4]. RS BUBBLE is transformationally reducible to RS SOLUTE: BUBBLE =(*)= SOLUTE.
/* The validity of the Lemma is based on the transformational reducedness of the multipass RS BUBBLE to the single-pass RS SOLUTE. The process of transformations is based on a number of identities and relations presented in [4, 20]. */

Lemma 2. RS SOLUTE is transformationally reducible to RS SHUTTLE: SOLUTE =(*)= SHUTTLE.

/* The validity of the Lemma is based on the substantially simplified process of transformational reducedness of the RSs of the algorithms mentioned, presented in Section 3. For the initial (more complicated) version of the transformations see [20, 4]. */

Note that all the algorithms considered are based on the transposition of adjacent elements, which substantially limits their efficiency. Let us consider the RS of the algorithm SAI (sorting by alternative insertions), in which the insertion of an element onto its place is realized from P_1 to P_2 (here P_1 and P_2 are distributed):

SAI ::= {[u] {[u + u'] R(P_1)} * SIP * INS},

where the basic predicates are u = d(P_1, F), u' = (l > r)\P_1, and the basic operators are:

SIP = ([l(P_2) < r(P_1)] {[r(P_2) > r(P_1)] R(P_2)}, {[l(P_2) < r(P_1)] L(P_2)});

INS = ALLOC(r(P_1), P_2) is the insertion onto its place (on P_2) of the element r\P_1.

/* The essence of RS SAI is that (in contrast to RS SHUTTLE) pointer P_1 is shifted over the array from left to right in search of a disordered element r(P_1), while pointer P_2, by shifting to the right or to the left depending on r(P_1), finds the place where r(P_1) is to be inserted into the already ordered fragment. */

Lemma 3 [20, 4]. RS SHUTTLE is transformationally reducible to RS SAI: SHUTTLE =(*)= SAI.

/* The validity of the Lemma is based on the transformational reducedness of the adaptive single-pass algorithms SHUTTLE and SAI to each other. Here a number of identities and relations are used, which are presented in [4, 22].
*/

The validity of the following statement is based on Lemmas 1-3.

Theorem 1. The algorithm BUBBLE is transformationally reducible to the adaptive algorithm SAI, which is more efficient than the adaptive SHUTTLE: BUBBLE =(*)= SHUTTLE =(*)= SAI.

Let us consider the Shell method (SM), known as a rather efficient method of sorting, which is based on the transposition not of neighbours but of elements located at a distance k. We represent this method by the following RS:

SM ::= INIT * {[q] RP(k) * SORT(k) * ALLOC(S)} * FIN, (8)

where INIT is the operator of initialization with selection of the initial value of the parameter k := m (in particular, k := n, where n is the array length); q is the predicate that is true if the sorted array is ordered; RP(k) ::= {[d(P_2, P_1) = m] R(P_1)} is the operator of distribution of the pointers at a distance of m elements, where m = k/2 is the integer part of the division; R(P_1) is the shift of P_1 by one symbol to the right; SORT(k) is the RS of the algorithms of sorting of the subarrays

m_i: S P_2 P_1 a_i a_(i+k) a_(i+2k) ... a_(mk+1) ... a_n F, (9)

where i = 1, 2, ..., k; it is assumed that S < a < F for any element a of the sorted array, and on reaching the marker F the pointers do not shift further; ALLOC(S) is the operator of allocation of P_2 and P_1 on the beginning marker.
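The gap-halving structure of RS SM (8)-(9) can be sketched in Python as follows (an illustrative sketch of my own, not the paper's implementation; the function name and the insertion sorting of the subarrays are assumptions — SORT(k) may be interpreted by any of the algorithms above):

```python
def shell_sort(a):
    """Sketch of RS SM: INIT picks the initial gap k (here k = n);
    each pass orders the k distributed subarrays (9) by insertion of
    elements located at distance k; the gap is halved (m = k/2, integer
    part) until the whole array is ordered."""
    n = len(a)
    k = n                              # INIT: k := n
    while k >= 1:
        for i in range(k, n):          # sort each distributed subarray m_i
            x, j = a[i], i
            while j >= k and a[j - k] > x:
                a[j] = a[j - k]        # shift larger elements right by gap k
                j -= k
            a[j] = x                   # allocate the element on its place
        if k == 1:
            break                      # predicate q: the array is ordered
        k //= 2                        # halve the distance between pointers
    return a
```

At gap k = 1 the pass degenerates into ordinary insertion sorting, so termination with an ordered array is guaranteed.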
In turn, RS SORT(k) may be interpreted, in particular, into any of the algorithms considered earlier, for example, into bubble sorting of the distributed subarrays:

BUBBLE(k) ::= (k := n) * {[SAO(k)] {[d(P_1, F)] ([r\P_2 > r\P_1] TRANSP(r\P_2, r\P_1), E) * R(P_1, P_2)} * {[d(P_2, S)] L(P_2, P_1)}},

where (k := n) is the interpretation of INIT (n is the array length); SAO(k) is the predicate that is true if all the elements at a distance k from each other are ordered; d(P_1, F) (d(P_2, S)) is the predicate that is true if P_1 reaches marker F (P_2 reaches marker S, respectively); r\P_1 (r\P_2) is the element just to the right of P_1 (of P_2, respectively); R(P_2, P_1) (L(P_2, P_1)) is the operator of the shift of both pointers by one symbol to the right (left, respectively); TRANSP(r\P_1, r\P_2) is the operator of transposition of the elements viewed by the pointers, in the case of their disorder.

/* In accordance with RS BUBBLE(k), the maximal elements of the distributed subarrays (9) emerge at each turn of the cycle, as in the usual bubble sorting. The interpretation of RS SM may be realized quite similarly for the other sorting algorithms considered earlier. */

The following statement gives the solution of the problem of constructing adaptive sorting algorithms based on the Shell method [19, 18]:

Theorem 2. Within the framework of RS SM (8), the algorithm BUBBLE(k) is transformationally reducible to the adaptive algorithm SAI(k), which is more efficient than the adaptive SHUTTLE(k): BUBBLE(k) =(*)= SHUTTLE(k) =(*)= SAI(k).

5. Parallel Calculations, Tools of Algorithmics

This Section is devoted to the tools for the design and synthesis of algorithms and programs (sequential and parallel). Synchronous and asynchronous versions of the pipeline, rotary, and pile strategies of multiprocessing are reviewed. The presentation is illustrated by the core examples considered earlier [21-25]. In the case of parallel calculations, the factor of time is of great importance.
Time is quantized, i.e. divided into separate spans (quanta) of equal length. In the case of synchronous multiprocessing, a group of uniform operations is carried out during one quantum; in other respects the processing is performed sequentially. In the asynchronous case, several parallel processes are performed independently on different non-crossing distributed fragments (dataflows). Thus, the basic idea of the synchronous sorting of neighbours is the parallel comparison and transposition of all the disordered pairs of elements, with a one-side shift of the array (to the left or right) up to the ordering of the array as a whole [23]. In the asynchronous case, a functional distribution over processes and the data to be processed is realized. For example, in SAI/a, an asynchronous version of the algorithm SAI (see Section 4), pointer P_1 finds disordered elements in the array and sends them into a queue; at the same time, the elements from the queue are allocated, using P_2, onto their places, up to the moment when the sorting of the array is completed.

The synchronous pipeline belongs to the known parallel structures. The data are allocated on the input line; then they come sequentially to the conveyor, where they are synchronously processed when accessed by the parallel processes, up to the ordering of the array; finally, the resulting array is swapped out onto the output line. In [24, 25] a model of an asynchronous pipeline is proposed, where:
- the conveyor is replaced by independently working parallel processes;
- data exchanges between neighbouring processes are realized through queues;
- multiprocessing is completed when all the queues are emptied and the resulting data are swapped out from the rightmost queue onto the output line.

Thus, the model of the asynchronous pipeline is used in the above-mentioned algorithm SAI/a. Noteworthy is also the parallel version of RS SM, oriented towards multiprocessing of the subarrays m_i, i = 1, 2, ..., k; see (9).
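The dataflow of SAI/a can be illustrated by a single-threaded simulation (a sketch of my own under assumed naming, not the authors' implementation; in the genuinely asynchronous version the two "processes" run concurrently and exchange data through the queue):

```python
from collections import deque
import bisect

def sai_async_model(a):
    """Schematic simulation of SAI/a: 'process' P_1 scans the array and
    sends elements that break the order into a queue; 'process' P_2 takes
    elements from the queue and allocates each onto its place in the
    already ordered fragment."""
    ordered, q = [], deque()
    for x in a:                        # P_1: search for disordered elements
        if ordered and x < ordered[-1]:
            q.append(x)                # send the violator into the queue
        else:
            ordered.append(x)          # the ordered fragment grows
    while q:                           # P_2: allocate queued elements
        bisect.insort(ordered, q.popleft())
    return ordered
```

Sorting is complete when the queue is emptied, mirroring the termination condition of the asynchronous pipeline model.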
The sequential and parallel RSs considered in this review may be reinterpreted into other data domains, in particular, into classes of searching algorithms [4, 22]. In [26] algorithms of parallel searching in the Internet have been realized. In [27] (see also [28, 4]) estimates of the sequential and parallel algorithms obtained are presented. Thus, the following statement is true.

Theorem 3. The sequential algorithms obtained in Sections 3 and 4 may be transformationally reduced to parallel ones, and then, by means of reinterpretation, extended to other classes of problems. The calculation complexity of the algorithms obtained has been analyzed.

In [4, 11, 12, 26-28], modern designs aimed at developing the tools of the algebra of algorithmics are reflected, which go back to the Multiprocessist synthesiser of programs in target programming languages [8-10]. As target languages of the synthesiser Multiprocessist were used:
- languages of the imperative type (Pascal, C, Assembler, etc.);
- nontraditional languages of logical and parallel programming and of programming technology, respectively: Utopist, parallel Fortran, Rtran;
- languages for CNC machines, stamping processes, design of optico-mechanical tracts, etc. [8].

The modern state of automation in the design and synthesis of algorithms and programs is aimed at developing integrated tools that could substantially increase the abilities of the synthesiser Multiprocessist and combine:
- the developed apparatus of the algebra of algorithmics;
- tools of automation and programming in OOM;
- methods of descending, ascending, and combined design and synthesis of algorithms and programs.
It should be stressed that:
- the apparatus of the algebra of algorithmics, together with the working system of identities, quasi-identities, and relations, as well as the lemmas and theorems deduced from them, includes interconnected and mutually complementary analytical, textual-linguistic (e.g., SAA-schemes), and flow-graphic descriptions of algorithms and programs;
- to the tools of OOM belongs, first of all, UML, the universal modeling language, which includes visual means of representing algorithms and programs adequate to flow graphs, as well as IBM Rational Rose, a system of automation of modeling in UML containing general and special libraries and subsystems;
- the methods of design and synthesis from top to bottom, from bottom to top, and combined ones provide an increase in the quality of the synthesised objects at the expense of combining the advantages and levelling the disadvantages of the tools used separately [27-29].

6. Conclusion

In this review, results on one of the basic concepts of the algebra of algorithmics are considered: the transformational reducedness of schemes of algorithms and programs. A number of works were devoted earlier to the problems of automation in the application of identical transformations for SAA (see, for example, [9, 10]). However, nowadays the solution of this problem gains new importance in connection with the development of tools of the algebra of algorithmics in OOM; this is just the aspect to which the present review is devoted. Together with the demonstration of the essence of transformational reducedness by the examples of transformations of simple sorting algorithms (see Section 3), results are also presented on transformational reducedness to efficient adaptive algorithms, both sequential and parallel (see Sections 4, 5). The state of the work on the tools of the algebra of algorithmics, aimed at the design and synthesis of algorithms and programs in OOM, is characterized as well.
Significant attention has been paid to the problems of teaching the elements of algorithmics in combination with hardware solutions directed at synthesizing instruments of various design (see, for example, [30]). Let us list some topical problems belonging to this avenue of investigations:
1. Formation of criteria of functional completeness in subalgebras of the algebra of algorithmics, which are associated with the construction of algebras of algorithms meeting the paradigms of modern programming [6, 7, 31].
2. Construction of lattices of subalgebras in the algebras of formal languages and the associated automaton structures, in order to specify the known Chomsky classification and extend it to the algebra of algorithmics.
3. Construction of applied algebras of algorithms for solving a number of problems belonging to symbolic multiprocessing and for constructing efficient language processors.
4. Development of modern educational tools (including distance training via the Internet) based on the methods and tools of the algebra of algorithmics.
5. Development, on the basis of the apparatus of algorithmics, of systems analogous to the tools "VIKNO v SVIT" for a sound-based friendly interface with various categories of persons having different physical limitations.

References

[1] П. Ноден, К. Китте, Алгебраическая алгоритмика, Мир, Москва, 1999.
[2] В.М. Глушков, "Теория автоматов и формальные преобразования микропрограмм", Кибернетика, №5, 1965, pp. 1-10.
[3] В.М. Глушков, Г.Е. Цейтлин, Е.Л. Ющенко, Алгебра. Языки. Программирование, Наукова думка, Киев, 1989.
[4] Г.Е. Цейтлин, Введение в алгоритмику, Сфера, Киев, 1998.
[5] Г.О. Цейтлiн, Алгебра логiки та конструювання програм, Наукова думка, Київ, 1994.
[6] Г.Е. Цейтлин, "Алгебраическая алгоритмика: теория и приложения", Кибернетика и системный анализ, №1, 2003, pp. 8-18.
[7] Г.Е. Цейтлин, "Алгебры Глушкова и теория клонов", Кибернетика и системный анализ, №4, 2003, pp. 48-58.
[8] Е.Л. Ющенко, Г.Е. Цейтлин, В.П. Грицай, Т.К. Терзян, Многоуровневое структурное проектирование программ: Теоретические основы, инструментарий, Финансы и статистика, Москва, 1989.
[9] Г.М. Кирсанов, Г.Е. Цейтлин, Е.Л. Ющенко, "АНАЛИСТ – пакет программ для доказательства тождеств теорем в аксиоматизированных САА", Кибернетика, №4, 1979, pp. 28-33.
[10] А.Н. Петрушенко, "Об одном подходе к проблеме автоматизации оптимизирующих преобразований алгоритмов и программ", Кибернетика и системный анализ, №5, 1991, pp. 127-137.
[11] Г.Е. Цейтлин, А.С. Мохница, "Что такое алгебраическая алгоритмика?", Проблемы программирования, №2-3, 2004, pp. 52-59.
[12] Е.А. Яценко, А.С. Мохница, "Инструментальные средства конструирования синтаксически правильных параллельных алгоритмов и программ", Проблемы программирования, №2-3, 2004, pp. 444-450.
[13] А.А. Летичевский, "Функциональная эквивалентность дискретных преобразователей", Кибернетика, №2, 1969, pp. 15-17; №2, 1970, pp. 14-29; №1, 1972, pp. 1-5.
[14] Ю.В. Капитонова, А.А. Летичевский, Математическая теория проектирования вычислительных систем, Наука, Москва, 1988.
[15] Г.Е. Цейтлин, "Проблема тождественных преобразований схем структурированных программ с замкнутыми логическими условиями", Кибернетика, №3, 1978, pp. 50-57; №4, 1978, pp. 10-18; №5, 1979, pp. 44-51.
[16] Л.А. Калужнин, "Об алгоритмизации математических задач", Проблемы кибернетики, №2, 1959, pp. 51-69.
[17] Д. Кнут, Искусство программирования для ЭВМ, Т. 3, Мир, Москва, 1978.
[18] Г. Лорин, Сортировка и системы сортировки, Наука, Москва, 1983.
[19] V. Estivill-Castro, D. Wood, "A Survey of Adaptive Sorting Algorithms", ACM Computing Surveys, Vol. 24, №4, December 1992, pp. 441-476.
[20] Г.Е. Цейтлин, "Формальная трансформация структурированных алгоритмов: сортировки", Программирование, №2, 1985, pp. 79-91.
[21] Г.Е. Цейтлин, "Проектирование алгоритмов параллельной сортировки", Программирование, №6, 1989, pp. 4-19.
[22] Г.Е. Цейтлин, "Поиск и сортировка: классификация, трансформация, синтез", Автоматика и телемеханика, №4, 1992, pp. 147-154; №5, 1992, pp. 156-165.
[23] В.М. Глушков, Г.Е. Цейтлин, Е.Л. Ющенко, Методы символьной мультиобработки, Наукова думка, Киев, 1980.
[24] Г.Е. Цейтлин, "Конструирование алгоритмов символьной обработки", Кибернетика и системный анализ, №2, 1993, pp. 17-29.
[25] Г.Е. Цейтлин, Теория алгоритмов и вычислительных процессов. Часть 2. Параллельные вычисления, ИПЦ МСУ, Киев, 2004.
[26] Г.Е. Цейтлин, Е.А. Яценко, "Элементы алгебраической алгоритмики и объектно-ориентированный синтез параллельных программ", Математические машины и системы, №2, 2003, pp. 64-76.
[27] О.А. Яценко, "Iнтегрованi алгебро-алгоритмiчнi моделi мультиобробки", Вiсник Київського нацiонального унiверситету. Серiя: фiзико-математичнi науки. Спец. випуск за матерiалами Мiжнародної конференцiї "Теоретичнi та прикладнi аспекти побудови програмних систем" (TAAPSD'2004), 2004, pp. 102-107.
[28] Г.Е. Цейтлин, Т.К. Терзян, Л.М. Захария, "Инструментарий конструирования экспертных систем символьной обработки", Математические машины и системы, №1, 1997, pp. 14-25.
[29] Г.Е. Цейтлин, С.Ф. Теленик, А.А. Амонс, "Алгебро-логическая формализация в объектно-ориентированных технологиях", Проблемы программирования, №1-2, 2002, pp. 136-146.
[30] J. Bača, Š. Korečko, P. Václavík, J. Porubän, "Didactic Version of Program System for Synthesis and Diagnostics of Logic Circuits", Проблемы программирования, №3, 2003, pp. 53-58.
[31] Ю.В. Капитонова, А.А. Летичевский, Парадигмы и идеи академика В.М. Глушкова, Наукова думка, Киев, 2003.