WRTC = OFF ; Configuration registers (300000-3000FFh) not write-protected
WRTD = OFF ; Data EEPROM not write-protected
EBTR0 = OFF ; Block 0 (000800-001FFFh) not protected from table reads executed in other blocks
EBTR1 = OFF ; Block 1 (002000-003FFFh) not protected from table reads executed in other blocks
EBTR2 = OFF ; Block 2 (004000-005FFFh) not protected from table reads executed in other blocks
EBTR3 = OFF ; Block 3 (006000-007FFFh) not protected from table reads executed in other blocks
EBTRB = OFF ; Boot block (000000-0007FFh) not protected from table reads executed in other blocks
Config_End
Declare PortB_Pullups = Off ' Disable the pull-up resistors on PORTB
Declare All_Digital = On ' Every port performs its default digital function
'-------------------- LCD connection settings --------------------
Declare LCD_Type ALPHA ' LCD type: alphanumeric
Declare LCD_DTPin PORTB.4 ' LCD data port
Declare LCD_ENPin PORTB.1 ' E (enable) line
Declare LCD_RSPin PORTB.0 ' RS (register select) line
Declare LCD_Interface 4 ' Data bus width, bits
Declare LCD_CommandUs 2000 ' Delay before sending a command, us
Declare LCD_DataUs 50 ' Delay before sending data, us
Declare LCD_Lines 2 ' Number of LCD lines
Symbol MainCodeStart = $1000 ' Start address of the main code
PROTON_START_ADDRESS = MainCodeStart ' Tell Proton where to compile to
Declare Bootloader = Off ' Disable the bootloader
;-------------------------- Port settings --------------------------
TRISC.2 = 0 ' PORTC.2 - output (alarm signal)
TRISB.2 = 1 ' PORTB.2 - input
TRISC.0 = 1 ' PORTC.0 - input
TRISA.1 = 0
TRISA.2 = 0
TRISA.3 = 0
TRISA.4 = 0
TRISA.5 = 0
TRISA.0 = 1 ' PORTA.0 - input for the pulse sensor (Symbol Pin below); was 0 in the source, but Counter needs an input pin
TRISC.1 = 0
TRISC.6 = 0
TRISC.7 = 0
TRISB.3 = 0
'-------------------- Variable declarations --------------------
Dim var1 As Word ' Declare the variable var1 of type Word
Symbol Pin = PORTA.0 ' Assign the symbol Pin to the PORTA.0 pin
'-------------------- Main program --------------------
PORTC.2 = 0
var1 = 0
Print At 1, 1, "pulsometr"
DelayMS 100
Loop:
var1 = Counter Pin, 15000 ' Count pulses on Pin (PORTA.0) for 15000 ms (15 s)
var1 = var1 * 4 ' Scale the 15-second count to pulses per minute
Print At 1, 1, "puls"
Print At 2, 1, "puls= ", Dec var1, " " ' Show the decimal value of var1 on LCD line 2
If var1 < 40 Then PORTC.2 = 1 ' Pulse below the threshold: set the alarm output
If var1 >= 40 Then PORTC.2 = 0 ' Pulse at or above the threshold: clear it (>= closes the gap at exactly 40)
GoTo Loop ' Repeat the measurement
The key statement is the Counter command, which counts the pulses arriving at the sensor pin (Pin = PORTA.0) during the specified period of 15000 ms and assigns the count to the variable var1. Arithmetic operations can then be performed on this variable.
In our case the number of pulses counted over the 15-second period is assigned to var1 and multiplied by 4 (15 s x 4 = 60 s), which gives the number of pulses per minute, i.e. the pulse rate. By comparing the measured pulse with a preset value corresponding to the subject's pulse level when falling asleep, appropriate action can be taken. In our program this threshold is set to 40.
Experimental work.
When assembling the device, the main difficulty is the precise adjustment of the pulse sensor built around the IR emitter and receiver. Since the oscillations occur within a very small range, a mount must be made that holds the sensor reliably on the measured surface, for example the earlobe, and shields the photodetector from extraneous light that degrades the operation of the device. This must be achieved while keeping the pulse meter miniature, since the main purpose of the device is to measure the pulse continuously in a mobile mode. If the pulse sensor is to be attached to the earlobe, it should preferably be built from surface-mount components, which are the smallest and most reliable because they have no protruding leads. A mobile device runs on batteries, so it is impractical to build the whole unit as a single piece clipped to the earlobe; the pulse sensor and the pulse-processing module must be separated. The device can be arranged as follows: the microcontroller unit, in a protective case together with its batteries, is built as a pendant worn at chest level, with a wire running from it to the pulse sensor attached to the earlobe.
Utebaev R.M., Koltun N.A., Kulmuhambetov A., Sarsenov B.
Computer modeling of a heart rate meter based on an IR sensor and a PIC microcontroller
Summary. This paper presents a method of building a device that reads the human pulse and raises an alert when the pulse falls below a specified level. The developed computer model and the device created on the basis of this model are described. A generic microcontroller board is proposed for processing the heart rate signals.
Key words: heart rate measurement, computer modeling, microcontroller.
UDC 004.75
R.K. Uskenbayeva¹, B.K. Kurmangaliyeva¹, Zh.B. Kalpeyeva², N.K. Mukhazhanov¹, D.K. Kozhamzharova²
(¹International University of Information Technologies, Almaty, Kazakhstan,
²Kazakh National Technical University named after K.I. Satpayev, Almaty, Kazakhstan)
DISTRIBUTED DATA PROCESSING IN HETEROGENEOUS CLOUD ENVIRONMENTS
Summary. The presented work focuses on the characteristics of distributed computing in heterogeneous cloud environments using Hadoop MapReduce technology. A practical example of data processing and analysis with these technologies is given. The scope of MapReduce and Hadoop technologies is diverse and covers almost all sectors of industry and business where access to large, often unstructured, data sets is needed. In such situations conventional relational DBMSs cannot cope with the processing and analysis of large amounts of data.
Key words: cloud computing, BigData, MapReduce, Hadoop, key-value, data analysis, processing of unstructured data
Introduction. At present, with the development of the cloud computing paradigm, which involves the use of a large number of processors working in parallel to solve computational problems, technologies for managing large amounts of data have emerged. One such instrument for distributed data processing is MapReduce (MR). MR is attractive to many programmers as a simple model on the basis of which users can build relatively sophisticated distributed programs.
The present work focuses on the implementation features of distributed computing in heterogeneous cloud environments using Hadoop MapReduce technology.
The scope of MapReduce and Hadoop technologies is diverse and covers almost all sectors of industry and business where access to large, often unstructured, data sets is needed. In such situations conventional relational DBMSs cannot cope with the processing and analysis of large amounts of data. At the same time, the ability to perform the necessary calculations quickly, and to scale, is a necessary condition for successful research. For efficient processing of large amounts of data, Google developed the distributed computing model called MapReduce in 2004 [1]. Examples of successful applications of this technology are given in detail in [2, 3].
The concept of MapReduce (cloud computing). The MapReduce programming model is intended for distributed processing of tasks on a cluster of servers. It was created by Google [1], where the first implementation of this model on the basis of the distributed file system GFS (Google File System) was also made [2]. This implementation is widely used in Google's own software products, but it is proprietary and not available for external use [4].
Thus, MapReduce (MR) is a paradigm for performing distributed computations over large amounts of data [5].
According to this concept, the problem of handling a large amount of data is decomposed into two phases: map and reduce.
The map(ƒ, j) phase takes a function ƒ and a list j, and returns the list obtained by applying the function ƒ to each element of the input list j. Map processes run on subsets of the input data and execute independently of each other (Fig. 1).
Fig. 1. The Map Phase
The reduce(ƒ, j) phase takes a function ƒ and a list j, and returns an object formed by aggregating the input data j with the function ƒ. Reduce processes consume the output of the map phase, partitioned by key values into non-overlapping blocks, which allows them, too, to be executed independently (Fig. 2).
Thus, each of the phases can be processed simultaneously on an arbitrary, pre-defined number of servers.
Fig. 2. The Reduce Phase
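To make the decomposition concrete, here is a minimal sketch of the two phases in plain Java (our illustration, not code from the article; all names are invented), using the classic word-count example: the map phase turns each input record into (key, 1) pairs, and the reduce phase groups the pairs by key and aggregates the values.

import java.util.*;
import java.util.stream.*;

public class MapReduceSketch {
    public static void main(String[] args) {
        List<String> lines = List.of("a b a", "b c");

        // Map phase: each input record independently yields (word, 1) pairs.
        List<Map.Entry<String, Integer>> pairs = lines.stream()
                .flatMap(line -> Arrays.stream(line.split(" ")))
                .map(word -> Map.entry(word, 1))
                .collect(Collectors.toList());

        // Shuffle and reduce phase: group the pairs by key and sum the values.
        Map<String, Integer> counts = pairs.stream()
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.summingInt(Map.Entry::getValue)));

        System.out.println(counts); // e.g. {a=2, b=2, c=1}
    }
}

Because every pair carries its own key, the grouping step can partition the pairs across machines by key, which is exactly what allows both phases to run in parallel.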
The architecture of Apache Hadoop. The real popularity of MapReduce technology came with an open-source implementation developed in the Hadoop project [6] by the Apache community. The widespread use of Hadoop MapReduce in various research and scientific projects demonstrates the undoubted benefits of this system, stimulating developers to improve it continuously.
Hadoop MapReduce is a programming model (framework) for performing distributed computing on large amounts of data within the map/reduce paradigm; it is a set of Java classes and executable utilities for creating and running parallel processing jobs [5]. Hadoop also allows the map and reduce steps to be implemented by arbitrary programs; in that case the program interacts with Hadoop through its standard input and output streams.
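As a sketch of that streaming contract (our illustration; the class name and tokenization are assumptions, not the article's code), an external mapper is just a program that reads records from standard input and writes tab-separated key/value lines to standard output:

import java.io.BufferedReader;
import java.io.InputStreamReader;

// A streaming-style mapper: reads lines from stdin, emits "word<TAB>1" pairs on stdout.
public class StreamMapper {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            for (String word : line.split("\\s+")) {
                if (!word.isEmpty()) {
                    System.out.println(word + "\t1");
                }
            }
        }
    }
}

Hadoop Streaming then feeds input splits to such a program and collects its output, so map and reduce steps can be written in any language.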
The Hadoop platform consists of several elements. At the base of the Hadoop architecture is the Hadoop Distributed File System (HDFS), which distributes files across multiple storage nodes in the Hadoop cluster (Fig. 3). Above the HDFS file system sits the MapReduce engine, consisting of JobTracker and TaskTracker nodes. To explain the operation of Hadoop, this section gives a brief description of each of these elements.
Hadoop Distributed File System (HDFS) is a distributed file system designed to store very large amounts of data (terabytes or even petabytes) and to provide high-speed access to this information [7]. All files stored in HDFS are divided into blocks of fixed size, 64 MB by default. For reliability, copies of blocks (replicas) are stored on multiple servers, three by default. The block size and the number of replicas (the replication factor) can be set individually for each file. HDFS is very similar to GFS and has a master-slave architecture: the master server is called the NameNode, and the slave servers are DataNodes [3].
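As an illustration of these parameters (our sketch, not from the article; the file path is hypothetical), the defaults can be overridden cluster-wide through the configuration keys dfs.blocksize and dfs.replication, or per file when it is created through the HDFS Java API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setLong("dfs.blocksize", 64L * 1024 * 1024); // 64 MB blocks (the default)
        conf.setInt("dfs.replication", 3);                // three replicas (the default)

        FileSystem fs = FileSystem.get(conf);
        // Per-file override: create(path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out = fs.create(new Path("/data/applicants.csv"),
                true, 4096, (short) 2, 128L * 1024 * 1024)) {
            out.writeBytes("id;name;major;grant\n");
        }
    }
}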
Fig. 3. Architecture of Hadoop
The NameNode exists in a single instance and acts as the metadata service of HDFS, while the DataNodes serve as its storage nodes. A Hadoop cluster contains a single NameNode and hundreds or thousands of DataNodes.
Actual input/output operations do not pass through the NameNode: it handles only the metadata mapping DataNodes to file blocks. When an external client sends a request to create a file, the NameNode responds with the identifier of the file block and the IP address of the DataNode that will hold the first copy of the block. The NameNode also informs the DataNodes that will receive copies of the block. The NameNode receives periodic status messages (so-called heartbeat messages) from each DataNode. If a DataNode fails to send a status message, the NameNode can take corrective action and replicate the blocks located on the failed node to other nodes in the cluster. Similar actions are taken in the event of a drive failure on a DataNode server, damage to individual replicas, or an increase in the replication factor of a file.
In the current implementation of HDFS the master node is a weak point of the system: when the NameNode fails, the system becomes inoperable and requires manual intervention. Automatic restart of the NameNode and its migration to another machine are not yet implemented.
Computations in Hadoop are implemented with a master-worker architecture. As in Google MapReduce, the system has a dedicated control process (the JobTracker) and many worker processes (TaskTrackers) that carry out users' tasks. The JobTracker accepts jobs from applications, splits them into map and reduce tasks, allocates the tasks to worker processes, tracks their execution, and restarts failed tasks. A TaskTracker requests tasks from the master process, uploads the code, executes the tasks, notifies the control process of task status, and provides access to the intermediate data of map tasks. The processes communicate via RPC calls, and all calls go from the worker to the master process in order to reduce the master's dependence on the state of the worker processes.
The practical implementation of distributed data processing in the Hadoop environment. This section describes practical experience of handling a large amount of data using the MapReduce paradigm. For distributed computing we organized a cluster of five machines, each running two virtual machines with Apache Hadoop pre-installed. As the experimental task we took the processing of unstructured data about the applicants of the university; in this article we consider the problem of counting the number of grants allocated per major. The algorithm consists of several steps:
1. As an initial step, the Map function is applied to each element of the source collection. Its duty is to convert each element of the original collection into zero or more Key/Value object instances.
2. In the next step, the algorithm sorts all Key/Value pairs and creates new object instances in which all values are grouped by key.
3. In the final step, the Reduce function is executed once for each grouped Key/Value instance; it returns a new object instance to be included in the resulting collection.
Fig. 4. The scheme of the MapReduce programming model
Listing 1 provides an implementation of the Map function.
Listing 1. Implementation of the Map function
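The code of Listing 1 did not survive extraction; the following is a plausible reconstruction with the standard Hadoop Java API. The input record layout (semicolon-separated applicant fields with the major code in the third column) and all class names are our assumptions:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Maps one applicant record to the pair (major, 1) for each grant record.
public class GrantMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text major = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed layout: semicolon-separated fields, major code in the third column.
        String[] fields = value.toString().split(";");
        if (fields.length >= 3) {
            major.set(fields[2].trim());
            context.write(major, ONE); // emit (major, 1)
        }
    }
}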
Next comes the implementation of the Reduce function (see Listing 2).
Listing 2. Implementation of the Reduce function
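Listing 2 is likewise lost; a matching reconstruction of the Reduce function simply sums the ones emitted for each major:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the 1s emitted by the mapper: the total number of grants per major.
public class GrantReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}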
Finally, the results are collected (see Listing 3).
Listing 3. Collecting the results
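Listing 3 is also lost; a standard driver (again our reconstruction) configures the job, wires the classes together, and collects the aggregated (major, count) pairs into the output directory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GrantCount {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "grant count by major");
        job.setJarByClass(GrantCount.class);
        job.setMapperClass(GrantMapper.class);
        job.setCombinerClass(GrantReducer.class); // local pre-aggregation on map nodes
        job.setReducerClass(GrantReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}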
The results obtained in the experiment on processing the input data are shown in Fig. 5.
Fig. 5. The processing result of the MapReduce functions
Conclusion
In this article we have given only a small sample of data analysis and briefly touched upon Hadoop's possibilities, without delving into all the benefits of its infrastructure. Even from this small case study one can see that Hadoop greatly simplifies the analysis of data, allowing you to work with a distributed set of cluster nodes. Although the original implementation of the technology is a proprietary development, its public counterparts are actively developing as open-source projects. Thanks to Hadoop, distributed processing and analysis of data have become available not only to giants like Google and Yahoo but to ordinary users as well. These technologies, which emerged from business, are also beginning to be used in the academic world, since modern science and research problems often place the same demands on computing resources as the problems of big companies.
In the future we plan to fully explore these technologies and apply their capabilities to the needs of the academic community.
REFERENCES
1. Dean J., Ghemawat S. MapReduce: Simplified Data Processing on Large Clusters. Google, Inc., 2004.
2. Stonebraker M., Abadi D., DeWitt D.J. et al. MapReduce and parallel DBMSs: friends or foes? // Commun. ACM. 2010. Vol. 53, no. 1. P. 64-71. Retrieved from: http://doi.acm.org/10.1145/1629175.1629197 (accessed 13.05.2014).
3. Lam C. Hadoop in Action. M.: DMK Press, 2012. 424 p.
4. Kuznecov S. MapReduce: inside, outside, or from the side of the parallel databases? Retrieved from: http://citforum.ru/database/articles/dw_appliance_and_mr/6.shtml#ref24 (accessed 03.09.2013).
5. Petukhov D. Hadoop MapReduce. The basic concept and architecture. Retrieved from: http://www.codeinstinct.pro/2012/08/mapreduce-design.html (accessed 03.09.2013).
6. Apache Hadoop home page. Retrieved from: http://hadoop.apache.org/ (accessed 28.08.2013).
7. Sukhoroslov O.V. New technologies for distributed storage and processing of large data sets // All-Russian competitive selection of overview and analytical articles in the priority area "Information and telecommunication systems", 2008. 40 p.