УДК: 004.430
Parallel computing and OpenMPI
Kurmanova Zhanna
Parallel computing evolved from serial computing in an attempt to emulate what has always been
the state of affairs in the natural world, where many complex, interrelated events happen at the same
time, concurrently: for instance, planetary movements, automobile assembly, galaxy formation,
and weather and ocean patterns.
Historically, it is considered to be “the high end of computing” and has been used to model difficult
scientific, computational and engineering problems.
In the computational field, the technique of solving computational tasks by using multiple
resources of different types simultaneously is called parallel computing. It breaks a large problem down into
smaller ones, which are solved concurrently [1].
Another term related to parallel computing is parallel processing. It describes the
simultaneous use of more than one central processing unit to execute a program.
There are many possible ways of designing a parallel computer, and Michael Flynn in 1966
developed a taxonomy for parallel processors, a way of thinking about these alternatives. Flynn
categorized them based on two parameters: the stream of instructions (the algorithm) and the stream of
data (the input). The instructions can be carried out one at a time or concurrently, and the data can be
processed one at a time or in multiples. In Flynn's scheme, SISD is "Single Instruction stream, Single
Data stream," and refers to a traditional sequential computer in which a single operation can be carried
out on a single data item at a time.
The two main categories of parallel processor are SIMD and MIMD. In a SIMD (Single Instruction,
Multiple Data) machine, many processors operate simultaneously, carrying out the same operation on
many different pieces of data. In a MIMD (Multiple Instruction, Multiple Data) machine, the number of
processors may be fewer but they are capable of acting independently on different pieces of data. The
remaining category, MISD (Multiple Instruction, Single Data), is rarely used since its meaning is not
clearly defined. Since it implies that several instructions are being applied to each piece of data, the term
is sometimes used to describe a vector supercomputer in which data pass through a pipeline of
processors each with a different instruction [2].
Main memory in a parallel computer is either shared memory (all processors have equal access to
all memory), or distributed memory (each processor has its own memory).
A huge number of software systems have been designed for programming parallel computers, both
at the operating system and programming language level. These systems must provide mechanisms for
partitioning the overall problem into separate tasks and allocating tasks to processors. Such mechanisms
may provide either implicit parallelism, where the system (the compiler or some other program) partitions the
problem and allocates tasks to processors automatically, or explicit parallelism, where the programmer
must annotate the program to show how it is to be partitioned. It is also usual to provide synchronisation
primitives such as semaphores and monitors to allow processes to share resources without conflict [3].
Load balancing attempts to keep all processors busy by moving tasks from heavily loaded processors
to less loaded ones.
Communication between tasks may be either via shared memory or message passing.
The Message Passing Interface (MPI) is a programming interface (API) for
passing information that allows processes performing a single task to exchange messages.
It was designed by William Gropp, Ewing Lusk, and others.
MPI is the most widespread standard communication interface for parallel programming and has
implementations for a large number of computer platforms. It is used to develop programs for clusters and
supercomputers. The primary means of communication between processes in MPI is passing messages
to each other. Nowadays there is a large number of free and commercial implementations of MPI,
with bindings for Fortran 77/90, Java, C, and C++ [4].
Generally, MPI is oriented toward systems with distributed memory, where the cost of transferring large
amounts of data between nodes matters, while OpenMP is oriented toward systems with shared memory
(multi-core with a shared cache). Both technologies
can be used together in order to make optimal use of multi-core systems in a cluster.
The basic mechanism of communication between MPI processes is sending and receiving
messages. A message carries the data to be shared between processes, together with an envelope consisting of:
sender - the rank (id) of the sender;
receiver - the rank of the receiver;
tag - can be used to separate different types of messages;
communicator - the code of the process group.
Matrix multiplication was chosen as an example problem. According to the conditions,
the matrices have size NxN (where N is at least 100). The user defines the number of processes
from the console, the program fills the matrices with random numbers from 0 to 10, and after the
standard calculations the result is obtained.
The algorithm of the program:
1. Initialize MPI using MPI_Init(), and get the number of processes and the rank of the current
process (functions MPI_Comm_size and MPI_Comm_rank respectively). Calculate the
number of worker processes (numberOfWorkers = numberOfTasks - 1).
2. The Master process fills matrices A and B with random numbers. Then it performs the decomposition
according to the chosen algorithm, calculating the average number of rows per worker, the offset into
the matrix, and the number of extra rows:
averageRow = N/numberOfWorkers; //average rows per worker
extra= N%numberOfWorkers; //extra rows
offset = 0;
This data, together with matrix B, is sent to the worker processes for calculation.
3. After the last message, the Master switches to receiving messages. Each worker process, after
initialization, waits for its input data, then calculates and sends the result back to the Master process.
4. The Master receives all the messages and outputs the result.
The code below demonstrates matrix multiplication in both cases. To better visualize the
difference between sequential and parallel computing, it is preferable to choose a large matrix size (at least
100x100). The sequential application terminated in 0.017791 seconds, while the run time
of the parallel program was 0.008055 seconds, more than 2 times faster than the first one.
C. MPI Program Example
Filename: MPI_Matrix.c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#define N 100 /* number of rows and columns in matrix */
MPI_Status status;
int a[N][N], b[N][N], c[N][N]; /* matrices used */
int main(int argc, char **argv){
    struct timeval start, stop;
    int numberOfTasks, mtype, taskID, numberOfWorkers, source, destination, rows,
        averageRow, extra, offset, i, j, k, x, z, mr, mc;
    /* first initialization */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskID);
    MPI_Comm_size(MPI_COMM_WORLD, &numberOfTasks);
    numberOfWorkers = numberOfTasks - 1;
    /*---------------------------- master ----------------------------*/
    if (taskID == 0) {
        printf("\n matrix a \n");
        printf("--------------\n");
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                a[i][j] = rand() % 3;
        for (i = 0; i < N; i++) {
            for (j = 0; j < N; j++)
                printf("%3d", a[i][j]);
            printf("\n");
        }
        printf("\n matrix b \n");
        printf("--------------\n");
        for (x = 0; x < N; x++)
            for (z = 0; z < N; z++)
                b[x][z] = rand() % 3;
        for (x = 0; x < N; x++) {
            for (z = 0; z < N; z++)
                printf("%3d ", b[x][z]);
            printf("\n");
        }
        /* send matrix data to the worker tasks */
        gettimeofday(&start, 0);
        averageRow = N / numberOfWorkers; /* average rows per worker */
        extra = N % numberOfWorkers;      /* extra rows */
        offset = 0;
        for (destination = 1; destination <= numberOfWorkers; destination++) {
            if (destination <= extra) {
                rows = averageRow + 1;
            } else {
                rows = averageRow;
            }
            mtype = 1;
            MPI_Send(&offset, 1, MPI_INT, destination, mtype, MPI_COMM_WORLD);
            MPI_Send(&rows, 1, MPI_INT, destination, mtype, MPI_COMM_WORLD);
            MPI_Send(&a[offset][0], rows*N, MPI_INT, destination, mtype, MPI_COMM_WORLD);
            MPI_Send(&b, N*N, MPI_INT, destination, mtype, MPI_COMM_WORLD);
            offset = offset + rows;
        }
        /* wait for results from all worker tasks */
        for (i = 1; i <= numberOfWorkers; i++) {
            mtype = 2;
            source = i;
            MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
            MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
            MPI_Recv(&c[offset][0], rows*N, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        }
        gettimeofday(&stop, 0);
        printf("Result matrix is:----------------------------------------------\n");
        for (mr = 0; mr < N; mr++) {
            for (mc = 0; mc < N; mc++)
                printf("%3d ", c[mr][mc]);
            printf("\n");
        }
        fprintf(stdout, "Time = %.6f\n\n",
                (stop.tv_sec + stop.tv_usec*1e-6) - (start.tv_sec + start.tv_usec*1e-6));
    }
    /*---------------------------- worker ----------------------------*/
    if (taskID > 0) {
        source = 0;
        mtype = 1;
        MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&a, rows*N, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&b, N*N, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        /* Matrix multiplication over this worker's block of rows */
        for (k = 0; k < N; k++)
            for (i = 0; i < rows; i++) {
                c[i][k] = 0;
                for (j = 0; j < N; j++)
                    c[i][k] = c[i][k] + a[i][j] * b[j][k];
            }
        mtype = 2;
        MPI_Send(&offset, 1, MPI_INT, 0, mtype, MPI_COMM_WORLD);
        MPI_Send(&rows, 1, MPI_INT, 0, mtype, MPI_COMM_WORLD);
        MPI_Send(&c, rows*N, MPI_INT, 0, mtype, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
Sequentially executed program example
Filename: Matrix.c
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#define N 100
int a[N][N], b[N][N], c[N][N];
int main(){
    struct timeval start, stop;
    int i, j, mc, mr, k, x, z;
    printf("\n matrix a \n");
    printf("--------------------------------------\n");
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            a[i][j] = rand() % 10;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++)
            printf("%3d", a[i][j]);
        printf("\n");
    }
    printf("\n matrix b \n");
    printf("----------------------------------------\n");
    for (x = 0; x < N; x++)
        for (z = 0; z < N; z++)
            b[x][z] = rand() % 10;
    for (x = 0; x < N; x++) {
        for (z = 0; z < N; z++)
            printf("%3d", b[x][z]);
        printf("\n");
    }
    gettimeofday(&start, 0);
    for (k = 0; k < N; k++)
        for (i = 0; i < N; i++) {
            c[i][k] = 0;
            for (j = 0; j < N; j++)
                c[i][k] = c[i][k] + a[i][j] * b[j][k];
        }
    gettimeofday(&stop, 0);
    printf("Result matrix is:----------------------------------------------\n");
    for (mr = 0; mr < N; mr++) {
        for (mc = 0; mc < N; mc++)
            printf("%3d ", c[mr][mc]);
        printf("\n");
    }
    fprintf(stdout, "Time = %.6f\n\n",
            (stop.tv_sec + stop.tv_usec*1e-6) - (start.tv_sec + start.tv_usec*1e-6));
    return 0;
}
Pic.1 Output of MPI_Matrix.c.
Pic.2 Output of Matrix.c.
To install the OpenMPI package on our computer we need to follow several instructions, which here
are performed on the Ubuntu operating system.
1. Install OpenMPI
$ sudo apt-get install libopenmpi-dev openmpi-bin openmpi-doc
$ sudo apt-get install ssh
2. Configure SSH
$ ssh-keygen -t dsa
$ cd ~/.ssh
$ cat id_dsa.pub >> authorized_keys
3. Compilation
$ mpicc MPI_Matrix.c -o MPI_Matrix
4. Execution
$ mpiexec -n 5 MPI_Matrix [5].
As we can see, properly designed programs with parallel execution are faster than their
sequential counterparts, which is a marked advantage: it saves money and time and makes it possible
to solve large problems. Therefore, the near future will see increased use of parallel computing
technologies at all levels of computing, which will result in the extended use of parallel computers in all
areas of human activity such as financial modelling, data mining and multimedia systems, in addition to
traditional application areas of parallel computing such as scientific computing and simulation.
References:
1. http://en.wikipedia.org/wiki/Parallel_computing
2. http://www.encyclopedia.com/topic/parallel_processing.aspx
3. http://neohumanism.org/p/pa/parallel_computing.html
4. http://ru.wikipedia.org/wiki/Message_Passing_Interface
5. http://varuagdiary.blogspot.com/2011/05/open-mpi-on-ubuntu.html
УДК 400.240
FINANCIAL SYSTEM MODEL FROM SYSTEM DYNAMICS PERSPECTIVE
G. Ilyin, Zh. Tlepbergenova
Kazakh-British Technical University
Abstract
Financial systems are crucial to the allocation of resources in a modern economy. They channel
household savings to the corporate sector and allocate investment funds among firms.
There are several measures which define the financial health of an organization. But the importance of
Net cash flow, Gross income, Net income, Pending bills, Receivable bills, Debt, and Book value can
never be undermined as they give the exact picture of the financial condition. While there are several
approaches to study the dynamics of these variables, system dynamics based modeling and simulation is
one of the modern techniques. A system dynamics perspective has a powerful logic that offers substantial
improvements in dealing with issues in strategic management, whether one-off challenges or the
continuous direction of enterprise strategy. The paper explores this method to simulate the aforementioned
parameters during production capacity expansion in the electronics industry. First a simple
model of the financial system is developed, followed by an explanation of the simulation of the model.
Introduction
A financial system can be defined at the global, regional or firm specific level. The firm's financial
system is the set of implemented procedures that track the financial activities of the company. On a
regional scale, the financial system is the system that enables lenders and borrowers to exchange funds.
The global financial system is basically a broader regional system that encompasses all financial
institutions, borrowers and lenders within the global economy.
System Dynamics is a visual and analytical way of investigating how complex systems work. It
models the relationships between elements in a system and how these relationships influence the
behavior of the system over time. There are a number of forces which make the issue of financial system
design extremely complicated and various approaches have been floated into this research domain.
Fundamental and radical changes in the financial industries, such as deregulation and concurrent
advances in technology, have made a visible impact on the provision of financial services and the way
the financial terms are used to enhance the system performance. Deregulation, in various parts of the
world, has made flexible the provision of financial services and promoted competition among financial
institutions, and advances in technology have increased profitability and facilitated faster processing and
monitoring of multiple activities at even lower costs. Concurrently, a thorough study of financial terms
has been practiced over the past several decades, as it provides direct benefits such as enhanced
leverage particularly in the form of tax benefits, support in terms of restructuring or economic downturns
because of long-term lender relations, ability to borrow more for long-term project purposes, and
increase in investment efficiency. Allocation decisions are also vital because they are the basis for future
success or failure of the company.
Background
The financial dynamics modeled and simulated in this paper revolve around the following variables,
which influence business performance. There has been extensive study of the influence of key
financial terms on business performance over the past several decades, and research in this direction is an
ongoing endeavor. In the context of this research, the following are the variables of interest.
Taxable income is generally the gross income or adjusted gross income minus any deductions,
exemptions or other adjustments that are allowable in that tax year. Taxable income is also generated
from appreciated assets that have been sold or capitalized during the year and from dividends and interest
income. Income from these sources is generally taxed at a different rate and calculated separately by the
tax entity.
Net income - also referred to as the bottom line, net profit, or net earnings - is an entity's income
minus expenses for an accounting period. It is computed as the residual of all revenues and gains over all
expenses and losses for the period, and has also been defined as the net increase in stockholders' equity
that results from a company's operations.
Net cash flow refers to the difference between a company's cash inflows and outflows in a given
period. In the strictest sense, net cash flow refers to the change in a company's cash balance as detailed
on its cash flow statement.
Since the early days, the impact of capital structure on the value of the firm has been a puzzling
issue in corporate finance. A review of the literature suggests that non-interest income is not only a
function of size of the industry, credit risk, interest rate risk, liquidity risk, overheads, loan loss
provisions, and before-tax profit, but also has an important bearing on the debt structure. The capital
structure of a company is usually leveraged by the ratio of debt to equity. There is also an argument that
the role of debt is in conveying inside information to the market. The capital structure consists of
companies obtaining funds or capital from equity or the combination of debt and equity. The cost of debt
capital generally can be determined as the interest rate being charged for the long-term debt. Liu (2000)
empirically compared the debt service capacity indicators between the most important factors that cause
financial crises. According to him the critical factor that causes financial crisis is short-term debt to total
debt ratio.
Book value is the value of an asset according to its balance sheet account balance. The value is
based on the original cost of the asset less any depreciation, amortization or impairment costs made
against the asset. A company's book value is its total assets minus intangible assets and liabilities. However, in
practice, depending on the source of the calculation, book value may variably include goodwill,
intangible assets, or both. Most of the time the fact that book value can be tangible or intangible is
ignored in financial calculations. When intangible assets and goodwill are explicitly excluded, the metric
is often specified to be tangible book value. In the current analysis the Book value is taken as New
investment minus the Tax depreciation and refers only to the tangibles.
Modeling of Financial System
The system dynamics modeling scenario chosen in this analysis is that of a hypothetical electronic
system manufacturer who aims at an expected annual production of about 8000 units in the next five
years from a current production of about 1100 units. The company plans for an annual increase rate of
production by 10% to 40% per year through augmentation of production equipment and wants to
simulate the financial dynamics with the specific variables of interest. Revenue for the manufacturer is
basically through the production sales. Unit price of sales (US$) is taken for convenience as the
simulation figures may be multiplied by the selling price for realistic values. The dynamics involved
considers Receivable bills and Pending bills, the difference of which is the actual Billings. Variable cost per
unit is assumed to be about 60% of the unit selling price. Pending bills will be the actual billing minus
the production revenue and Receivable bills is the Billings minus the sum of the receivable cash and
losses (with a loss rate of about 6%). The rate of Receivable cash is calculated in terms of the payment
delay which is considered slightly over a month. The stock and flow diagram (Figure 1) indicates the
interrelationship between the variables of research interest. Some key formulas are as follows:
Taxable income = Gross income – (Variable costs + Losses + Interest payments + Tax
Depreciation).
Net income = Taxable income – Taxes.
Net cash flow = Receivable cash + Loans – (New investment + Variable costs + Interest payments
+Repayment rate + Taxes).
Debt = Loans - Repayment rate.
Book value = New investment - Tax depreciation.
Simulation is carried out to study the influence of the planned increase of production on: Pending
bills, Receivable bills, Net cash flow, Gross income, Net income, Debt and Book value.