Information security
Information security is the process of protecting the availability, privacy, and
integrity of data. While the term often describes measures and methods of increasing
computer security, it also refers to the protection of any type of important data, such
as personal diaries or the classified plot details of an upcoming book. No security
system is foolproof, but taking basic and practical steps to protect data is critical for
good information security.
Password Protection
Using passwords is one of the most basic methods of improving information
security. This measure reduces the number of people who have easy access to the
information, since only those with approved codes can reach it. Unfortunately,
passwords are not foolproof, and hacking programs can run through millions of
possible codes in just seconds. Passwords can also be breached through carelessness,
such as by leaving a public computer logged into an account or using an overly
simple code, like "password" or "1234."
To make access as secure as possible, users should create passwords that use a
mix of upper and lowercase letters, numbers, and symbols, and avoid easily guessed
combinations such as birthdays or family names. People should not write down
passwords on papers left near the computer, and should use different passwords for
each account. For better security, a computer user may want to consider switching to
a new password every few months.
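To make this concrete, here is a minimal Python sketch of a random password generator
following the advice above; the length of 16 and the particular symbol set are
illustrative choices, not requirements from the text.

import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"

def generate_password(length: int = 16) -> str:
    """Return a random password mixing upper- and lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that actually contain all four character classes.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in SYMBOLS for c in candidate)):
            return candidate

print(generate_password())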
Antivirus and Malware Protection
One way that hackers gain access to secure information is through malware,
which includes computer viruses, spyware, worms, and other programs. These pieces
of code are installed on computers to steal information, limit usability, record user
actions, or destroy data. Using strong antivirus software is one of the best ways of
improving information security. Antivirus programs scan the system to check for any
known malicious software, and most will warn the user if he or she is on a webpage
that contains a potential virus. Most programs will also perform a scan of the entire
system on command, identifying and destroying any harmful objects.
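Real antivirus engines are far more sophisticated, but the toy Python sketch below
conveys the basic idea of a signature scan: hash every file and compare the digest
against a list of known-bad hashes. The signature value and the scanned directory are
placeholders for illustration, not real malware data.

import hashlib
from pathlib import Path

# Placeholder "signature database" of SHA-256 digests of known malicious files.
KNOWN_BAD_HASHES = {"0" * 64}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return the files under the directory whose digests match a known signature."""
    return [p for p in Path(directory).rglob("*")
            if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]

for infected in scan("."):
    print("Possible malware:", infected)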
Most operating systems include a basic antivirus program that will help protect
the computer to some degree. The most secure programs are typically those available
for a monthly subscription or one-time fee, and which can be downloaded online or
purchased in a store. Antivirus software can also be downloaded for free online,
although these programs may offer fewer features and less protection than paid
versions.
Even the best antivirus programs usually need to be updated regularly to keep
up with the new malware, and most software will alert the user when a new update is
available for downloading. Users must be aware of the name and contact method of
each antivirus program they own, however, as some viruses will pose as security
programs in order to get an unsuspecting user to download and install more malware.
Running a full computer scan on a weekly basis is a good way to weed out potentially
malicious programs.
Firewalls
A firewall helps maintain computer information security by preventing
unauthorized access to a network. There are several ways to do this, including by
limiting the types of data allowed in and out of the network, re-routing network
information through a proxy server to hide the real address of the computer, or by
monitoring the characteristics of the data to determine if it's trustworthy. In essence,
firewalls filter the information that passes through them, only allowing authorized
content in. Specific websites, protocols (like File Transfer Protocol or FTP), and even
words can be blocked from coming in, as can outside access to computers within the
firewall.
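The filtering idea can be sketched in a few lines of Python; the blocked port and
addresses below are invented purely for illustration, and real firewalls operate at
the network layer with far richer rule sets.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int

# Hypothetical rules: block FTP (port 21) and one untrusted address.
BLOCKED_PORTS = {21}
BLOCKED_IPS = {"203.0.113.7"}

def allow(packet: Packet) -> bool:
    """Return True if the packet passes the filter, False if it should be dropped."""
    if packet.src_ip in BLOCKED_IPS or packet.dst_port in BLOCKED_PORTS:
        return False
    return True

print(allow(Packet("198.51.100.2", 443)))  # True: HTTPS from an unlisted host
print(allow(Packet("203.0.113.7", 80)))    # False: blocked source address
print(allow(Packet("198.51.100.2", 21)))   # False: FTP port is blocked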
Most computer operating systems include a pre-installed firewall program, but
independent programs can also be purchased for additional security options. Together
with an antivirus package, firewalls significantly increase information security by
reducing the chance that a hacker will gain access to private data. Without a firewall,
secure data is more vulnerable to attack.
Codes and Cyphers
Encoding data is one of the oldest ways of securing written information.
Governments and military organizations often use encryption systems to ensure that
secret messages will be unreadable if they are intercepted by the wrong person.
Encryption methods can include simple substitution codes, like switching each letter
for a corresponding number, or more complex systems that require complicated
algorithms for decryption. As long as the code method is kept secret, encryption can
be a good basic method of information security.
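For instance, the letter-for-number substitution mentioned above can be written in a
few lines of Python; it is only a toy cipher for illustration and offers no real
security once the scheme is known.

def encode(message: str) -> str:
    """Replace each letter with its position in the alphabet (A=1 ... Z=26)."""
    return "-".join(str(ord(ch) - ord("A") + 1)
                    for ch in message.upper() if ch.isalpha())

def decode(ciphertext: str) -> str:
    """Reverse the substitution, turning the numbers back into letters."""
    return "".join(chr(int(n) + ord("A") - 1) for n in ciphertext.split("-"))

secret = encode("ATTACK AT DAWN")
print(secret)          # 1-20-20-1-3-11-1-20-4-1-23-14
print(decode(secret))  # ATTACKATDAWN (this toy scheme drops spaces)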
On computer systems, there are a number of ways to encrypt data to make it
more secure. With a symmetric key system, only the sender and the receiver have the
code that allows the data to be read. Public or asymmetric key encryption involves
using two keys — one that is publicly available so that anyone can encrypt data with
it, and one that is private, so only the person with that key can read the data that has
been encoded. Secure Sockets Layer (SSL) uses digital certificates, which confirm that
the connected computers are who they say they are, together with both symmetric and
asymmetric keys, to encrypt the information being passed between computers.
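A minimal sketch of the symmetric case in Python, assuming the third-party
cryptography package is installed: both sides must hold the same key, which is
exactly the property described above. Public-key encryption and SSL/TLS involve more
machinery and are not shown here.

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the sender and receiver must both hold this key
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place at noon")
print(token)                  # unreadable ciphertext
print(cipher.decrypt(token))  # b'meet at the usual place at noon'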
Legal Liability
Businesses and industries can also maintain information security by using
privacy laws. Workers at a company that handles secure data may be required to sign
non-disclosure agreements (NDAs), which forbid them from revealing or discussing
any classified topics. If an employee attempts to give or sell secrets to a competitor or
other unapproved source, the company can use the NDA as grounds for legal
proceedings. The use of liability laws can help companies preserve their trademarks,
internal processes, and research with some degree of reliability.
Training and Common Sense
One of the greatest dangers to computer data security is human error or
ignorance. Those responsible for using or running a computer network must be
carefully trained in order to avoid accidentally opening the system to hackers. In the
workplace, creating a training program that includes information on existing security
measures as well as permitted and prohibited computer usage can reduce breaches in
internal security. Family members on a home network should be taught about running
virus scans, identifying potential Internet threats, and protecting personal information
online.
In business and personal behavior, the importance of maintaining information
security through caution and common sense cannot be overstated. A person who
gives out personal information, such as a home address or telephone number, without
considering the consequences may quickly find himself the victim of scams, spam,
and identity theft. Likewise, a business that doesn't establish a strong chain of
command for keeping data secure, or provides inadequate security training for
workers, creates an unstable security system. By taking the time to ensure that data is
handed out carefully and to reputable sources, the risk of a security breach can be
significantly reduced.
Information security
Cyber terrorists are a fearsome lot, and they grow more dangerous every day. As
companies try to buttress their security walls, they find themselves short of skilled professionals.
Recently, the number of internet-based security attacks has mounted
dangerously. According to CERT/CC, the internet security research centre at
Carnegie Mellon University, USA, the number of security incidents reported has
increased to an alarming 137,529 in '03 — compared to 82,094 in '02 and a mere
1,334 a decade ago.
Despite its importance, businesses across the world have paid only lip service
to information security. Until recently, companies, especially in developing countries
like India, made no allowance in their budget for information security and did not
consider it as mission critical. However, a recent spate of security intrusions,
malicious software such as viruses and denial of service attacks on corporate
websites, like the recent attacks on Microsoft and SCO by MyDoom, have changed the
mindset of Indian businesses.
As organisations continue to deploy mission-critical, network-centric
information systems, managing the security of such systems has become critical. For
example, a recent Economic Times-CIO survey reported that organisations spend up
to 16.7% of their budget on information security, next only to their spending on
enterprise systems.
Companies like Mahindra & Mahindra and ICICI have full-fledged teams
working on deployment and maintenance of information security infrastructure. Not
just businesses, governments too are concerned about information security. The US
federal government retains more than 10,000 employees classified as computer
security professionals, far more than the number present two years ago, to manage its
security infrastructure. Of late, even the business process outsourcing (BPO) industry
in India has begun to look at information security to protect and ensure data privacy.
Computers
Nowadays we cannot imagine our lives without computers, and the fact is that
they have become so important that nothing can replace them. They seem to be
everywhere today. Since 1948, when the first real computer was invented, our lives
have changed so much that we can speak of a real digital revolution.
The first computers differed greatly from today's machines. They were so huge
that they occupied whole rooms or even buildings, yet they were relatively slow, no
faster than a modern watch or calculator. Computers used by scientists today may
still be as huge as the old ones, but they are millions of times faster. They can
perform many complex operations simultaneously, and scientists practically cannot do
without them. Thanks to computers, people have access to an enormous amount of
information, and gathering data has never been simpler. Computers are used not only
in laboratories but also in factories to control production. Sometimes it is
computers that manufacture other computers.
But computers are not used only in science and industry. Thanks to them,
modern medicine can diagnose diseases faster and more thoroughly. In banking, too,
computers have become irreplaceable: they control ATMs, all data is stored on hard
disks, and paper is no longer used in accountancy. Furthermore, architects, designers,
and engineers can't imagine their work without computers. These machines really are
everywhere, and we depend on them even in fields such as criminology, where they
help the police solve crimes and collect evidence.
Moreover, computers are widespread in education. Besides their classic tasks
such as administration and accountancy, they are used in the learning process itself.
Firstly, they store enormous amounts of data, which helps students find information.
Secondly, thanks to special teaching techniques and programs, they improve our
concentration and our ability to absorb knowledge. They have become so popular
that not knowing how to use them amounts to being illiterate.
Of course, besides these superb features there is also a dark side to computer
technology, because every invention brings us not only benefits but also threats.
Advantages:
1. Computers save storage space. Imagine how much paper would have to be
used, and how many trees would have to be cut down, just to store the information
that today sits on hard disks. Printed on paper, the data stored on a single CD would
fill a room of dozens of square meters and weigh thousands of kilos. Techniques for
converting data from paper to digital form have also developed tremendously. You
can simply retype the text using a keyboard; if you are not good at typing, you can
use a scanner to scan the necessary documents; and there are even special devices
that can turn our voice into text. Thanks to computers, banks, private and government
companies, libraries, and many other institutions can save millions of square meters
and billions of dollars. We now have access to vast amounts of information, and
thanks to the computer's capabilities we no longer need to worry about how to store
it or how to process it.
2. Computers can calculate and process information faster and more accurately
than humans. Newspapers sometimes report, falsely, that something failed because of
a "computer mistake". But this is not true, because machines cannot make mistakes
on their own. Sometimes it is a short circuit, sometimes a hardware problem, but
most often it is a human mistake, made by whoever designed and wrote the flawed
computer program.
3. Computers improve our lives. They are very useful in office work, where we
can write texts such as reports and analyses. Compared with old typewriters, when
using computers we don't have to worry about typing mistakes, because special
programs help us avoid them and we can correct the text at any time. When the text
is finished, we can print it in as many copies as we want. Last but not least, we can
communicate with the whole world very quickly and cheaply using the Internet.
4. Computers are user-friendly. We can watch videos and listen to music with
only a PC; we no longer need a video player, a TV, and a stack of hi-fi equipment.
Furthermore, we don't have to buy desktop PCs, which take up a lot of room with all
their components and wires. We can always buy a laptop or a palmtop, which is even
smaller, and use it anywhere we want.
Disadvantages:
1. Computers can be dangerous to our health. Monitors used to be harmful to
our eyesight; nowadays, thanks to technological development, they are much safer.
But there are other threats to our health besides damage to our sight. Working with
computers and staring at the monitor for long periods can trigger epileptic seizures,
especially in children. Very often parents want to have a rest and don't pay enough
attention to how long their children spend at the computer. The same negative effect
also applies to TV screens.
2. Computers sometimes break down. The biggest problem is when the hard
disk fails, because of the data stored on it; other hardware is easily replaceable. But
there are many ways of avoiding the consequences of losing data, for example by
backing it up on CDs. Besides hardware failures there are also software ones. For
example, for many years the Windows operating system was very unstable, which is
why many other operating systems were written; now the most common are Linux,
Windows XP, and Mac OS (for Macintosh computers). Apart from unstable operating
systems, another, and perhaps the main, threat to our data is computer viruses. There
are huge numbers of them, and new ones come into being every day. If you have an
Internet connection you have to be particularly careful and install anti-virus
programs. Fortunately, there are many of them, and most are freeware. You just have
to remember to download updates.
3. Violence and sex. The main threats to younger computer users are internet
pornography and violent games. The presence of sexual content or the level of
violence should be properly labelled, and parents must pay attention to this issue.
There are many extremely violent games, such as "Grand Theft Auto" and "Quake".
In GTA, for example, you play a member of the mafia, and to rise in the criminal
hierarchy you must kidnap people, steal cars, rob banks, and so on; as a bonus you
can also run over pedestrians. There are also many games in which you are a soldier
whose mission is to kill as many enemies as possible. The other threat to children is
internet pornography. The availability of sexual content is enormous, and it is very
hard to protect a child, especially one who is curious about the subject.
4. Another threat is computer addiction. If you spend most of your free time
using a computer, you should consider seeing a psychologist.
However, I think the situation is quite serious: computers are practically
irreplaceable, and we can no longer do without them. They are everywhere, in our
homes, schools, workplaces, and cars. It is entirely possible that the next stage of
human evolution will be some kind of being that is half human and half machine,
although I don't think that lies in the near future. What does seem true is that
computers will rule the world sooner or later.
Algorithms and Applications
Humans perceive the three-dimensional structure of the world with apparent
ease. However, despite all of the recent advances in computer vision research, the
dream of having a computer interpret an image at the same level as a two-year-old
remains elusive. Why is computer vision such a challenging problem and what is the
current state of the art?
Computer Vision: Algorithms and Applications explores the variety of
techniques commonly used to analyze and interpret images. It also describes
challenging real-world applications where vision is being successfully used, both for
specialized applications such as medical imaging, and for fun, consumer-level tasks
such as image editing and stitching, which students can apply to their own personal
photos and videos.
More than just a source of “recipes,” this exceptionally authoritative and
comprehensive textbook/reference also takes a scientific approach to basic vision
problems, formulating physical models of the imaging process before inverting them
to produce descriptions of a scene. These problems are also analyzed using statistical
models and solved using rigorous engineering techniques.
Topics and features: structured to support active curricula and project-oriented
courses, with tips in the Introduction for using the book in a variety of customized
courses; presents exercises at the end of each chapter with a heavy emphasis on
testing algorithms and containing numerous suggestions for small mid-term projects;
provides additional material and more detailed mathematical topics in the
Appendices, which cover linear algebra, numerical techniques, and Bayesian
estimation theory; suggests additional reading at the end of each chapter, including
the latest research in each sub-field, in addition to a full Bibliography at the end of
the book; supplies supplementary course material for students at the associated
website, http://szeliski.org/Book/.
Suitable for an upper-level undergraduate or graduate-level course in computer
science or engineering, this textbook focuses on basic techniques that work under
real-world conditions and encourages students to push their creative boundaries. Its
design and exposition also make it eminently suitable as a unique reference to the
fundamental techniques and current research literature in computer vision.
Retrospective: An Axiomatic Basis for Computer Programming
By C.A.R. Hoare
Communications of the ACM, Vol. 52 No. 10, Pages 30-32
10.1145/1562764.1562779
Retrospective (1969–1999)
My first job (1960–1968) was in the computer industry; and my first major
project was to lead a team that implemented an early compiler for ALGOL 60. Our
compiler was directly structured on the syntax of the language, so elegantly and so
rigorously formalized as a context-free language. But the semantics of the language
was even more important, and that was left informal in the language definition. It
occurred to me that an elegant formalization might consist of a collection of axioms,
similar to those introduced by Euclid to formalize the science of land measurement.
My hope was to find axioms that would be strong enough to enable programmers to
discharge their responsibility to write correct and efficient programs. Yet I wanted
them to be weak enough to permit a variety of efficient implementation strategies,
suited to the particular characteristics of the widely varying hardware architectures
prevalent at the time.
I expected that research into the axiomatic method would occupy me for my
entire working life; and I expected that its results would not find widespread practical
application in industry until after I reached retirement age. These expectations led me
in 1968 to move from an industrial to an academic career. And when I retired in
1999, both the positive and the negative expectations had been entirely fulfilled.
The main attraction of the axiomatic method was its potential provision of an
objective criterion of the quality of a programming language, and the ease with which
programmers could use it. For this reason, I appealed to academic researchers
engaged in programming language design to help me in the research. The latest
response comes from hardware designers, who are using axioms in anger (and for the
same reasons as given above) to define the properties of modern multicore chips with
weak memory consistency.
One thing I got spectacularly wrong. I could see that programs were getting
larger, and I thought that testing would be an increasingly ineffective way of
removing errors from them. I did not realize that the success of tests is that they test
the programmer, not the program. Rigorous testing regimes rapidly persuade error-
prone programmers (like me) to remove themselves from the profession. Failure in
test immediately punishes any lapse in programming concentration, and (just as
important) the failure count enables implementers to resist management pressure for
premature delivery of unreliable code. The experience, judgment, and intuition of
programmers who have survived the rigors of testing are what make programs of the
present day useful, efficient, and (nearly) correct. Formal methods for achieving
correctness must support the intuitive judgment of programmers, not replace it.
My basic mistake was to set up proof in opposition to testing, where in fact
both of them are valuable and mutually supportive ways of accumulating evidence of
the correctness and serviceability of programs. As in other branches of engineering, it
is the responsibility of the individual software engineer to use all available and
practicable methods, in a combination adapted to the needs of a particular project,
product, client, or environment. The best contribution of the scientific researcher is to
extend and improve the methods available to the engineer, and to provide convincing
evidence of their range of applicability. Any more direct advocacy of personal
research results actually excites resistance from the engineer.
Progress (1999–2009)
On retirement from University, I accepted a job offer from Microsoft Research
in Cambridge (England). I was surprised to discover that assertions, sprinkled more
or less liberally in the program text, were used in development practice, not to prove
correctness of programs, but rather to help detect and diagnose programming errors.
They are evaluated at runtime during overnight tests, and indicate the occurrence of
any error as close as possible to the place in the program where it actually occurred.
The more expensive assertions were removed from customer code before delivery.
More recently, the use of assertions as contracts between one module of program and
another has been incorporated in Microsoft implementations of standard
programming languages. This is just one example of the use of formal methods in
debugging, long before it becomes possible to use them in proof of correctness.
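As a small illustration of how runtime assertions catch an error close to where it
occurs, consider the Python fragment below; it is not taken from any Microsoft code,
just a sketch of a precondition and a postcondition checked during testing.

def average(values: list[float]) -> float:
    # Precondition: the caller must supply at least one value.
    assert len(values) > 0, "average() called with an empty list"
    result = sum(values) / len(values)
    # Postcondition: the average must lie between the smallest and largest input.
    assert min(values) <= result <= max(values), "result outside input range"
    return result

print(average([2.0, 4.0, 9.0]))  # 5.0
print(average([]))               # fails fast, pointing straight at the faulty call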
In 1969, my proof rules for programs were devised to extract easily from a
well-asserted program the mathematical 'verification conditions', the proof of which
is required to establish program correctness. I expected that these conditions would
be proved by the reasoning methods of standard logic, on the basis of standard
axioms and theories of discrete mathematics. What has happened in recent years is
exactly the opposite of this, and even more interesting. New branches of applied
discrete mathematics have been developed to formalize the programming concepts
that have been introduced since 1969 into standard programming languages (for
example, objects, classes, heaps, pointers). New forms of algebra have been
discovered for application to distributed, concurrent, and communicating processes.
New forms of modal logic and abstract domains, with carefully restricted expressive
power, have been invented to simplify human and mechanical reasoning about
programs. They include the dynamic logic of actions, temporal logic, linear logic, and
separation logic. Some of these theories are now being reused in the study of
computational biology, genetics, and sociology.
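To make the notion of a verification condition concrete, here is a textbook-style
example (not drawn from Hoare's paper itself) using his assignment axiom, written in
LaTeX:

% Assignment axiom: {Q[x := E]}  x := E  {Q}
\[
  \{\, x \ge 0 \,\}\quad x := x + 1 \quad \{\, x \ge 1 \,\}
\]
% Substituting x + 1 for x in the postcondition yields the verification condition
\[
  x \ge 0 \;\Rightarrow\; x + 1 \ge 1,
\]
% a trivial fact of arithmetic, so the triple is proved.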
Equally spectacular (and to me unexpected) progress has been made in the
automation of logical and mathematical proof. Part of this is due to Moore's Law.
Since 1969, we have seen steady exponential improvements in computer capacity,
speed, and cost, from megabytes to gigabytes, and from megahertz to gigahertz, and
from megabucks to kilobucks. There has also been at least a thousand-fold increase in
the efficiency of algorithms for proof discovery and counterexample (test case)
generation. Crudely multiplying these factors, a trillion-fold improvement has
brought us over a tipping point, at which it has become easier (and certainly more
reliable) for a researcher in verification to use the available proof tools than not to do
so. There is a prospect that the activities of a scientific user community will give back
to the tool-builders a wealth of experience, together with realistic experimental and
competition material, leading to yet further improvements of the tools.
For many years I used to speculate about the eventual way in which the results
of research into verification might reach practical application. A general belief was
that some accident or series of accidents involving loss of life, perhaps followed by
an expensive suit for damages, would persuade software managers to consider the
merits of program verification.
This never happened. When a bug occurred, like the one that crashed the
maiden flight of the Ariane V spacecraft in 1996, the first response of the manager
was to intensify the test regimes, on the reasonable grounds that if the erroneous code
had been exercised on test, it would have been easily corrected before launch. And if
the issue ever came to court, the defense of 'state-of-the-art' practice would always
prevail. It was clearly a mistake to try to frighten people into changing their ways. Far
more effective is the incentive of reduction in cost. A recent report from the U.S.
Department of Commerce has suggested that the cost of programming error to the
world economy is measured in tens of billions of dollars per year, most of it falling
(in small but frequent doses) on the users of software rather than on the producers.
The phenomenon that triggered interest in software verification from the
software industry was totally unpredicted and unpredictable. It was the attack of the
hacker, leading to an occasional shutdown of worldwide commercial activity, costing
an estimated $4 billion on each occasion. A hacker exploits vulnerabilities in code
that no reasonable test strategy could ever remove (perhaps by provoking race
conditions, or even bringing dead code cunningly to life). The only way to reach
these vulnerabilities is by automatic analysis of the text of the program itself. And it
is much cheaper, whenever possible, to base the analysis on mathematical proof,
rather than to deal individually with a flood of false alarms. In the interests of
security and safety, other industries (automobile, electronics, aerospace) are also
pioneering the use of formal tools for programming. There is now ample scope for
employment of formal methods researchers in applied industrial research.
Prospective
In 1969, I was afraid industrial research would dispose such vastly superior
resources that the academic researcher would be well advised to withdraw from
competition and move to a new area of research. But again, I was wrong. Pure
academic research and applied industrial research are complementary, and should be
pursued concurrently and in collaboration. The goal of industrial research is (and
should always be) to pluck the 'low-hanging fruit'; that is, to solve the easiest parts of
the most prevalent problems, in the particular circumstances of here and now. But the
goal of the pure research scientist is exactly the opposite: it is to construct the most
general theories, covering the widest possible range of phenomena, and to seek
certainty of knowledge that will endure for future generations. It is to avoid the
compromises so essential to engineering, and to seek ideals like accuracy of
measurement, purity of materials, and correctness of programs, far beyond the current
perceived needs of industry or popularity in the market-place. For this reason, it is
only scientific research that can prepare mankind for the unknown unknowns of the
forever uncertain future.
So I believe there is now a better scope than ever for pure research in computer
science. The research must be motivated by curiosity about the fundamental
principles of computer programming, and the desire to answer the basic questions
common to all branches of science: what does this program do; how does it work;
why does it work; and what is the evidence for believing the answers to all these
questions? We know in principle how to answer them. It is the specification that
describes what a program does; it is assertions and other internal interface contracts
between component modules that explain how it works; it is programming language
semantics that explains why it works; and it is mathematical and logical proof,
nowadays constructed and checked by computer, that ensures mutual consistency of
specifications, interfaces, programs, and their implementations.
There are grounds for hope that progress in basic research will be much faster
than in the early days. I have already described the vastly broader theories that have
been proposed to understand the concepts of modern programming. I have welcomed
the enormous increase in the power of automated tools for proof. The remaining
opportunity and obligation for the scientist is to conduct convincing experiments, to
check whether the tools, and the theories on which they are based, are adequate to
cover the vast range of programs, design patterns, languages, and applications of
today's computers. Such experiments will often be the rational reengineering of
existing realistic applications. Experience gained in the experiments is expected to
lead to revisions and improvements in the tools, and in the theories on which the tools
were based. Scientific rivalry between experimenters and between tool builders can
thereby lead to an exponential growth in the capabilities of the tools and their fitness
for purpose. The knowledge and understanding gained in worldwide long-term
research will guide the evolution of sophisticated design automation tools for
software, to match the design automation tools routinely available to engineers of
other disciplines.
The End
No exponential growth can continue forever. I hope progress in verification
will not slow down until our programming theories and tools are adequate for all
existing applications of computers, and for supporting the continuing stream of
innovations that computers make possible in all aspects of modern life. By that time,
I hope the phenomenon of programming error will be reduced to insignificance:
computer programming will be recognized as the most reliable of engineering
disciplines, and computer programs will be considered the most reliable components
in any system that includes them.
Even then, verification will not be a panacea. Verification technology can only
work against errors that have been accurately specified, with as much accuracy and
attention to detail as all other aspects of the programming task. There will always be
a limit at which the engineer judges that the cost of such specification is greater than
the benefit that could be obtained from it; and that testing will be adequate for the
purpose, and cheaper. Finally, verification cannot protect against errors in the
specification itself. All these limits can be freely acknowledged by the scientist, with
no reduction in enthusiasm for pushing back the limits as far as they will go.