Defense in depth
Information security must protect information throughout its life span, from its
initial creation through to its final disposal. The information must be protected while
in motion and while at rest. During its lifetime, information may pass through many
different information processing systems and through many different parts of those
systems. There are many ways in which the information and information systems can
be threatened. To fully protect the information during its lifetime, each component of
the information processing system must have its own protection mechanisms. The
building up, layering on and overlapping of security measures is called defense in
depth. The strength of any system is no greater than its weakest link. Using a
defense-in-depth strategy, should one defensive measure fail, there are other
defensive measures in place that continue to provide protection.
The three types of controls mentioned above (administrative, logical, and
physical) can be used to form the basis upon which to build a defense-in-depth
strategy. With this approach, defense in depth can be conceptualized as three distinct
layers or planes laid one on top of the other. Additional insight into defense in depth
can be gained by thinking of it as forming the layers of an onion, with data at the core
of the onion, people the next outer layer, and network security, host-based security
and application security forming the outermost layers of the onion. Both perspectives
are equally valid, and each provides valuable insight into the
implementation of a good defense-in-depth strategy.
Security classification for information. An important aspect of information
security and risk management is recognizing the value of information and defining
appropriate procedures and protection requirements for the information. Not all
information is equal, and so not all information requires the same degree of
protection. This requires information to be assigned a security classification.
The first step in information classification is to identify a member of senior
management as the owner of the particular information to be classified. Next, develop
a classification policy. The policy should describe the different classification labels,
define the criteria for information to be assigned a particular label, and list the
required security controls for each classification.
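As a purely illustrative sketch (the labels, review periods and control names below are assumptions, not requirements from the text), such a policy can be thought of as a simple lookup from a classification label to its required security controls:

    # A minimal sketch of a classification policy lookup.
    # The labels and controls below are illustrative assumptions only.
    CLASSIFICATION_CONTROLS = {
        "Public":       {"encryption_required": False, "access_review_days": 365},
        "Sensitive":    {"encryption_required": True,  "access_review_days": 180},
        "Confidential": {"encryption_required": True,  "access_review_days": 90},
    }

    def required_controls(label: str) -> dict:
        """Return the controls a label demands, or raise if the label is unknown."""
        try:
            return CLASSIFICATION_CONTROLS[label]
        except KeyError:
            raise ValueError(f"Unknown classification label: {label}")

    print(required_controls("Confidential"))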
Some factors that influence which classification should be assigned to
information include how much value the information has to the organization, how old
the information is and whether or not the information has become obsolete. Laws and
other regulatory requirements are also important considerations when classifying
information.
The type of information security classification labels selected and used will
depend on the nature of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Sensitive But
Unclassified, Restricted, Confidential, Secret, Top Secret and their non-English
equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of:
White, Green, Amber and Red.
All employees in the organization, as well as business partners, must be trained
on the classification schema and understand the required security controls and
handling procedures for each classification. The classification assigned to a particular
information asset should be reviewed periodically to ensure the classification is still
appropriate for the information and to ensure the security controls required by the
classification are in place.
Access control. Access to protected information must be restricted to people
who are authorized to access the information. The computer programs, and in many
cases the computers that process the information, must also be authorized. This
requires that mechanisms be in place to control the access to protected information.
The sophistication of the access control mechanisms should be proportional to the
value of the information being protected – the more sensitive or valuable the
information, the stronger the control mechanisms need to be. The foundation on
which access control mechanisms are built starts with identification and
authentication.
Identification is an assertion of who someone is or what something is. If a
person makes the statement "Hello, my name is John Doe" they are making a claim of
who they are. However, their claim may or may not be true. Before John Doe can be
granted access to protected information it will be necessary to verify that the person
claiming to be John Doe really is John Doe.
Authentication is the act of verifying a claim of identity. When John Doe goes
into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of
identity). The bank teller asks to see a photo ID, so he hands the teller his driver's
license. The bank teller checks the license to make sure it has John Doe printed on it
and compares the photograph on the license against the person claiming to be John
Doe. If the photo and name match the person, then the teller has authenticated that
John Doe is who he claimed to be.
There are three different types of information that can be used for
authentication: something you know, something you have, or something you are.
Examples of something you know include such things as a PIN, a password, or your
mother's maiden name. Examples of something you have include a driver's license or
a magnetic swipe card. Something you are refers to biometrics. Examples of biometrics
include palm prints, fingerprints, voice prints and retina (eye) scans. Strong
authentication requires providing information from two of the three different types of
authentication information – for example, something you know plus something you
have. This is called two-factor authentication.
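As a minimal illustrative sketch (the stored password, the one-time code and the function name are hypothetical), two-factor authentication can be expressed as a check that both factors verify before access is granted:

    # A minimal two-factor sketch (something you know + something you have).
    # The stored hash and the expected one-time code are hypothetical test values.
    import hashlib, hmac

    STORED_PASSWORD_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()
    EXPECTED_OTP = "492817"  # in practice this would come from a token or phone app

    def authenticate(password: str, otp: str) -> bool:
        """Both factors must verify; failing either one denies access."""
        knows = hmac.compare_digest(
            hashlib.sha256(password.encode()).hexdigest(), STORED_PASSWORD_HASH)
        has = hmac.compare_digest(otp, EXPECTED_OTP)
        return knows and has

    print(authenticate("correct horse battery staple", "492817"))  # True
    print(authenticate("correct horse battery staple", "000000"))  # False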
On computer systems in use today, the Username is the most common form of
identification and the Password is the most common form of authentication.
Usernames and passwords have served their purpose but in our modern world they
are no longer adequate. Usernames and passwords are slowly being replaced with
more sophisticated authentication mechanisms.
After a person, program or computer has successfully been identified and
authenticated, it must then be determined what informational resources they are
permitted to access and what actions they will be allowed to perform (run, view,
create, delete, or change). This is called authorization.
Authorization to access information and other computing services begins with
administrative policies and procedures. The policies prescribe what information and
computing services can be accessed, by whom, and under what conditions. The
access control mechanisms are then configured to enforce these policies.
Different computing systems are equipped with different kinds of access
control mechanisms - some may even offer a choice of different access control
mechanisms. The access control mechanism a system offers will be based upon one
of three approaches to access control or it may be derived from a combination of the
three approaches.
The non-discretionary approach consolidates all access control under a
centralized administration. Access to information and other resources is usually
based on the individual's function (role) in the organization or the tasks the individual
must perform. The discretionary approach gives the creator or owner of the
information resource the ability to control access to those resources. In the mandatory
access control approach, access is granted or denied based upon the security
classification assigned to the information resource.
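The role-based form of the non-discretionary approach can be sketched roughly as follows; the roles, permissions and user assignments are illustrative assumptions only:

    # A rough sketch of role-based (non-discretionary) access control.
    # Roles, permissions and the user assignments are made-up examples.
    ROLE_PERMISSIONS = {
        "clerk":   {"view"},
        "manager": {"view", "change"},
        "admin":   {"view", "create", "change", "delete"},
    }
    USER_ROLES = {"jdoe": "clerk", "asmith": "manager"}

    def is_authorized(user: str, action: str) -> bool:
        """Grant an action only if the user's role includes that permission."""
        role = USER_ROLES.get(user)
        return action in ROLE_PERMISSIONS.get(role, set())

    print(is_authorized("jdoe", "view"))    # True
    print(is_authorized("jdoe", "delete"))  # False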
Digital mapping
Digital mapping (also called digital cartography) is the process by which a
collection of data is compiled and formatted into a virtual image. The primary
function of this technology is to produce maps that give accurate representations of a
particular area, detailing major road arteries and other points of interest. The
technology also allows the calculation of distances from one place to another. Though
digital mapping can be found in a variety of computer applications, such as Google
Earth, the main use of these maps is with the Global Positioning System, or GPS
satellite network, used in standard automotive navigation systems.
History. The roots of digital mapping lie within traditional paper maps. Paper
maps provide basic landscapes similar to digitized road maps, yet are often
cumbersome, cover only a designated area, and lack many specific details such as
road blocks. In addition, there is no way to “update” a paper map except to obtain a
new version. On the other hand, digital maps, in many cases, can be updated through
synchronization with updates from company servers. Early digital maps had the same
basic functionality as paper maps – that is, they provided a “virtual view” of roads
generally outlined by the terrain encompassing the surrounding area. However, as
digital maps have grown with the expansion of GPS technology in the past decade,
live traffic updates, points of interest and service locations have been added to
enhance digital maps to be more “user conscious”. Traditional “virtual views” are
now only part of digital mapping. In many cases, users can choose between virtual
maps, satellite (aerial views), and hybrid (a combination of virtual map and aerial
views) views. With the ability to update and expand digital mapping devices, newly
constructed roads and places can be added to appear on maps.
Data Collection. Digital maps rely heavily upon a vast amount of data collected
over time. Most of the information that comprises digital maps is the culmination of
satellite imagery as well as street-level information. Maps must be updated
frequently to provide users with the most accurate reflection of a location. While
there is a wide spectrum of companies that specialize in digital mapping, the basic
premise is that digital maps will accurately portray roads as they actually appear to
give "life-like experiences".
Functionality and Use. Computer programs and applications such as Google
Earth and Google Maps provide map views from space and street level of much of
the world. Used primarily for recreation, Google Earth provides digital mapping
in personal applications, such as tracking distances or finding locations. The
development of mobile computing (tablet PCs, laptops, etc.) has recently (since
about 2000) spurred the use of digital mapping in the sciences and applied sciences.
As of 2009, science fields that use digital mapping technology include geology,
engineering, architecture, land surveying, mining, forestry, environment, and
archaeology. The principal way in which digital mapping has grown in the past
decade has been through its connection to Global Positioning System (GPS)
technology. GPS is the foundation behind digital mapping navigation systems. The
coordinates, position and atomic time obtained by a terrestrial GPS receiver from GPS
satellites orbiting the Earth are combined to provide the digital mapping programming
with the points of origin and destination needed to calculate distance. This
information is then analyzed and compiled to create a map that provides the easiest
and most efficient way to reach a destination. More
technically speaking, the device operates in the following manner: GPS receivers
collect data from "at least twenty-four GPS satellites" orbiting the Earth, calculating
position in three dimensions.
1. The GPS receiver then utilizes position to provide GPS coordinates, or exact
points of latitudinal and longitudinal direction from GPS satellites.
2. The points, or coordinates, are accurate to within approximately
"10-20 meters" of the actual location.
3. The beginning point, entered via GPS coordinates, and the ending point
(address or coordinates) input by the user, are then entered into the digital map.
4. The map outputs a real-time visual representation of the route. The map
then moves along the path of the driver.
5. If the driver drifts from the designated route, the navigation system will use
the current coordinates to recalculate a route to the destination location.
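The distance calculation mentioned in the steps above can be approximated with the well-known haversine (great-circle) formula; the sketch below uses arbitrary example coordinates and straight-line distance, whereas real navigation systems route along the road network:

    # A minimal sketch of straight-line (great-circle) distance between two GPS
    # coordinates using the haversine formula; not the routing a real system does.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres between two latitude/longitude points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        d_lat = lat2 - lat1
        d_lon = lon2 - lon1
        a = sin(d_lat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(d_lon / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))  # 6371 km is the mean Earth radius

    # Example: two arbitrary points, roughly London and Paris (about 344 km apart).
    print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522), 1))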
Computers
Generally, any device that can perform numerical calculations, even an adding
machine, may be called a computer but nowadays this term is used especially for
digital computers. Computers that once weighed 30 tons now may weigh as little as
1.8 kilograms. Microchips and microprocessors have considerably reduced the cost of
the electronic components required in a computer. Computers come in many sizes
and shapes such as special-purpose, laptop, desktop, minicomputers, supercomputers.
Special-purpose computers can perform specific tasks and their operations are
limited to the programmes built into their microchips. These computers are the basis
for electronic calculators and can be found in thousands of electronic products,
including digital watches and automobiles. Basically, these computers do the
ordinary arithmetic operations such as addition, subtraction, multiplication and
division.
General-purpose computers are much more powerful because they can accept
new sets of instructions. The smallest fully functional computers are called laptop
computers. Most of the general-purpose computers known as personal or desktop
computers can perform almost 5 million operations per second.
Today's personal computers are known to be used for different purposes: for
testing new theories or models that cannot be examined with experiments, as valuable
educational tools due to various encyclopedias, dictionaries, educational programmes,
in book-keeping, accounting and management. Proper application of computing
equipment in different industries is likely to result in proper management, effective
distribution of materials and resources, more efficient production and trade.
Minicomputers are high-speed computers that have greater data manipulating
capabilities than personal computers do and that can be used simultaneously by many
users. These machines are primarily used by larger businesses or by large research
and university centers. The speed and power of supercomputers, the highest class of
computers, are almost beyond comprehension, and their capabilities are continually
being improved. The most complex of these machines can perform nearly 32 billion
calculations per second and store 1 billion characters in memory at one time, and can
do in one hour what a desktop computer would take 40 years to do. They are used
commonly by government agencies and large research centers. Linking together
networks of several small computer centers and programming them to use a common
language has enabled engineers to create the supercomputer. The aim of this
technology is to elaborate a machine that could perform a trillion calculations per
second.
Digital computers
There are two fundamentally different types of computers: analog and digital.
The former type solves problems by using continuously changing data such as
voltage. In current usage, the term "computer" usually refers to high-speed digital
computers. These computers are playing an increasing role in all branches of the
economy.
Digital computers are based on manipulating discrete binary digits (1s and 0s).
They are generally more effective than analog computers for four principal reasons:
they are faster; they are not so susceptible to signal interference; they can transfer
huge databases more accurately; and their coded binary data are easier to store and
retrieve than the analog signals.
For all their apparent complexity, digital computers are considered to be simple
machines. A digital computer is able to recognize only two states in each of its
millions of switches: "on" or "off", or high voltage or low voltage. By assigning
binary numbers to these states, 1 for "on" and 0 for "off", and linking many switches
together, a computer can represent any type of data, from numbers to letters and
musical notes. This process of recognizing signals is known as digitization.
The real power of a computer depends on the speed with which it checks its switches.
The more switches a computer checks in each cycle, the more data it can
recognize at one time and the faster it can operate, each switch being called a binary
digit or bit.
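To make the switch analogy concrete, the short sketch below shows how a single character can be stored as a pattern of eight binary digits (bits); the choice of the letter "A" is arbitrary:

    # Each character can be stored as a pattern of on/off switches (bits).
    # Here the letter "A" (numeric code 65) is shown as eight binary digits.
    letter = "A"
    code = ord(letter)            # numeric code of the character
    bits = format(code, "08b")    # the same value as eight binary digits
    print(letter, code, bits)     # A 65 01000001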
A digital computer is a complex system of four functionally different elements:
1) the central processing unit (CPU), 2) input devices, 3) memory-storage devices
called disk drives, 4) output devices. These physical parts and all their physical
components are called hardware.
The power of computers depends greatly on the characteristics of memory-storage
devices. Most digital computers store data both internally, in what is called main
memory, and externally, on auxiliary storage units. As a computer processes data and
instructions, it temporarily stores information internally on special memory
microchips. Auxiliary storage units supplement the main memory when programmes
are too large and they also offer a more reliable method for storing data. There exist
different kinds of auxiliary storage devices, removable magnetic disks being the most
widely used. They can store up to 100 megabytes of data on one disk, a byte being
known as the basic unit of data storage.
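As a quick arithmetic illustration, and assuming the binary convention in which one megabyte is 2**20 bytes, 100 megabytes corresponds to the following number of bytes:

    # How many bytes fit in 100 megabytes, assuming the binary convention
    # (1 megabyte = 2**20 bytes; the decimal convention would use 10**6 instead).
    bytes_per_megabyte = 2 ** 20            # 1,048,576 bytes
    total_bytes = 100 * bytes_per_megabyte
    print(total_bytes)                      # 104857600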
Output devices let the user see the results of the computer's data processing.
Being the most commonly used output device, the monitor accepts video signals from
a computer and shows different kinds of information such as text, formulas and
graphics on its screen. With the help of various printers information stored in one of
the computer's memory systems can be easily printed on paper in a desired number of
copies.
Programmes, also called software, are detailed sequences of instructions that
direct the computer hardware to perform useful operations. Due to a computer's
operating system, hardware and software systems can work simultaneously. An
operating system consists of a number of programmes coordinating operations,
translating the data from different input and output devices, regulating data storage in
memory, transferring tasks to different processors, and providing functions that help
programmers to write software. In large corporations software is often written by
groups of experienced programmers, each person focusing on a specific aspect of the
total project. For this reason, scientific and industrial software sometimes costs much
more than do the computers on which the programmes run.
The first hackers
(1) The first "hackers" were students at the Massachusetts Institute of
Technology (MIT) who belonged to the TMRC (Tech Model Railroad Club). Some
of the members really built model trains. But many were more interested in the wires
and circuits underneath the track platform. Spending hours at TMRC creating better
circuitry was called "a mere hack." Those members who were interested in creating
innovative, stylistic, and technically clever circuits called themselves (with pride)
hackers.
(2) During the spring of 1959, a new course was offered at MIT, a freshman
programming class. Soon the hackers of the railroad club were spending days, hours,
and nights hacking away at their computer, an IBM 704. Instead of creating a better
circuit, their hack became creating a faster, more efficient program - with the least
number of lines of code. Eventually they formed a group and created the first
set of hacker's rules, called the Hacker's Ethic.
(3) Steven Levy, in his book Hackers, presented the rules:
Rule 1: Access to computers - and anything which might teach you something
about the way the world works - should be unlimited and total.
Rule 2: All information should be free.
Rule 3: Mistrust authority - promote decentralization.
Rule 4: Hackers should be judged by their hacking, not bogus criteria such as
degrees, race, or position.
Rule 5: You can create art and beauty on a computer.
Rule 6: Computers can change your life for the better.
(4) These rules made programming at MIT's Artificial Intelligence Laboratory
a challenging, all-encompassing endeavor. Just for the exhilaration of programming,
students in the AI Lab would write a new program to perform even the smallest tasks.
The program would be made available to others who would try to perform the same
task with fewer instructions. The act of making the computer work more elegantly
was, to a bona fide hacker, awe-inspiring.
(5) Hackers were given free rein on the computer by two AI Lab professors,
"Uncle" John McCarthy and Marvin Minsky, who realized that hacking created new
insights. Over the years, the AI Lab created many innovations: LIFE, a game about
survival; LISP, a new kind of programming language; the first computer chess game;
The CAVE, the first computer adventure; and SPACEWAR, the first video game.