

3. Access Control
January-2004 [12]
3.a) Explain briefly about Mandatory Access Control and Discretionary Access Control.
Mandatory Access Control (MAC) is an access policy determined by the system rather than by the owner of an object: every subject and object carries a security label, and the system itself enforces the policy by comparing labels on each access attempt, so individual users cannot override it. Some implementations (for example, FreeBSD's MAC framework) also allow new access control modules to be loaded to implement new security policies; some of these protect only a narrow subset of the system, hardening a particular service.
Discretionary access control means that each object has an owner, and the owner of the object chooses its access control policy. Many objects in Windows use this security model, including printers, services, and file shares. All securable kernel objects also use this model, including processes, threads, memory sections, synchronization objects such as mutexes and events, and named pipes.

b) Describe briefly the Bell-La Padula model and its limitations. [6]
The Bell-LaPadula model is designed to facilitate information sharing in a secure manner across information domains. Within the model, a hierarchy of levels is used to determine appropriate access rights. For example, using conventional DND document labeling standards, SECRET is treated as above CONFIDENTIAL. The Bell-LaPadula model uses the axioms of "read-down" and "write-up". Therefore, assuming appropriate need-to-know, an individual in a SECRET domain is authorized to "read down" into the CONFIDENTIAL domain, since personnel with sufficient clearance for SECRET are also cleared for CONFIDENTIAL. However, the user in the SECRET domain may never be authorized to "write down".
This is because the clearance in the CONFIDENTIAL domain is not sufficient to handle the SECRET information.
Similarly, an individual in a SECRET domain is not authorized to "read up" from a TOP SECRET domain, because the SECRET domain does not include a sufficient clearance. However, an individual in the SECRET domain may be authorized to "write up" to the TOP SECRET domain, since everyone in the TOP SECRET domain inherently has sufficient clearance to read the lower-domain information.
Limitations
• Restricted to Confidentiality.
• No policies for changing access rights; a complete general downgrade is secure; intended for systems with static security levels.
• Contains covert channels: a low subject can detect the existence of high objects when it is denied access.
• Sometimes it is not sufficient to hide only the contents of objects; their existence may have to be hidden as well.

July-2004 [11]
1.e) What are access control lists and capability lists? In what ways they differ in their organization? [4]
An access control list (ACL) is a file attribute that contains the basic and extended permissions controlling access to the file. It corresponds to a column of the access matrix: for one object, it lists which users hold which rights.
One way to partition the matrix is by rows, so that all access rights of one user are kept together. These are stored in a data structure called a capability list, which lists all the access rights, or capabilities, that a user has. The following are the capability lists for our example:
Fred --> /dev/console(RW) --> fred/prog.c(RW) --> fred/letter(RW) --> /usr/ucb/vi(X)
Jane --> /dev/console(RW) --> fred/prog.c(R) --> fred/letter() --> /usr/ucb/vi(X)
When a process tries to gain access to an object, the operating system can check the appropriate capability list.
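As a minimal illustration, a capability list can be modeled as a per-user mapping from objects to rights. This is only a sketch with hypothetical helper names, not a real OS interface; the objects and rights are taken from the Fred/Jane example above:

    # Each user's capability list maps object names to the set of rights
    # (capabilities) that the user holds on that object.
    cap_lists = {
        "fred": {"/dev/console": {"R", "W"}, "fred/prog.c": {"R", "W"},
                 "fred/letter": {"R", "W"}, "/usr/ucb/vi": {"X"}},
        "jane": {"/dev/console": {"R", "W"}, "fred/prog.c": {"R"},
                 "fred/letter": set(), "/usr/ucb/vi": {"X"}},
    }

    def has_capability(user, obj, right):
        # Look up the user's capability list and test the requested right.
        return right in cap_lists.get(user, {}).get(obj, set())

    print(has_capability("jane", "fred/prog.c", "W"))  # False: Jane holds only R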
Comparing ACLs and Capabilities

Two topics come up repeatedly on the EROS mailing lists:
1. How are capabilities and access control lists different?
2. Is one better than the other? If so, why?
They come up so frequently because people use these questions as a way to sharpen their understanding of the issues.
One thing that I learned in my own attempts to answer these questions is that the arguments are actually quite complex, and that the first mistake is to oversimplify them when you are trying to get a handle on them. You can simplify the explanation later; first be sure what it is you are trying to explain.
Since it's very easy to miss important details, this note tries to give my current answer to the first question. It describes how capabilities and access control lists actually work in practice, and therefore how they differ. It may tell you more than you feel you wish to know, but it is as accurate as I can make it without appealing to mathematics.
I clearly have an opinion on the second question, or I wouldn't have built EROS. This note tries not to be partisan, because it is better to understand the basis of the discussion before debating the merits of the outcome.
1. Access Control Lists
An ACL system has at least five namespaces whose relationships need to be considered:
1. The namespace of file names: /tmp/foo
2. The namespace of unique object identifiers: (dev 22, inode 36, type file)
3. The namespace of user identities (uid 52476)
4. For each object type (file, disk, terminal, ...), the namespace of operations that object can perform.
5. The namespace of process identifiers (process 719)
In an access list system, it is assumed that there are two global mappings:
principal: process identity -> user identity
fs_lookup: file name -> object identity
That is, every process has an assigned user identity and every file name can be translated into a unique object identifier. Hanging off of every unique object is a further mapping:
acl: (object identity, user identity) -> operation(s)
Given a process p that wishes to perform an operation op on an object object, the protection mechanism in an access list system is to test the following predicate:
op in acl(object, principal(p))
In the special case of the "open" call, this test is modified to be:
op in acl(fs_lookup(filename), principal(p))
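The two global mappings and the predicate can be written out directly. The following Python sketch is illustrative only; the identifiers mirror the notation above, not any real OS API:

    # principal: process identity -> user identity
    principal = {719: 52476}

    # fs_lookup: file name -> object identity
    fs_lookup = {"/tmp/foo": (22, 36, "file")}   # (dev, inode, type)

    # acl: (object identity, user identity) -> set of permitted operations
    acl = {((22, 36, "file"), 52476): {"read", "write"}}

    def acl_check(p, obj, op):
        # op in acl(object, principal(p))
        return op in acl.get((obj, principal[p]), set())

    def open_check(p, filename, op):
        # op in acl(fs_lookup(filename), principal(p))
        return acl_check(p, fs_lookup[filename], op)

    print(open_check(719, "/tmp/foo", "write"))  # True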
2. Capability Systems
A capability system has at least four namespaces whose relationships need to be considered:
1. The namespace of unique object identifiers: (dev 22, inode 36, type file)
2. For each object type (file, disk, terminal, ...), the namespace of operations that object can perform.
3. The namespace of process identifiers (process 719)
4. The namespace of capabilities (object 10, operation set S)
In a capability system, it is assumed that there is one local mapping for each process
cap: (process identity, index) -> capability
That is, every process has a list of capabilities. Each capability names an object and also names a set of legal operations on that object.
There are also two "accessor" functions:
obj: capability -> object identity
ops: capability -> operations
Given a process p that wishes to perform an operation op on an object object, the process must first possess a capability naming that object. That is, it must possess a capability at some index i such that
object == obj(caps(p,i))
To perform an operation, the process names the "index" i of that capability to be invoked from the per-process list. The protection mechanism in a capability system is to test the following predicate:
op in ops(caps(p,i))
Capability systems typically do not have a distinguished "open" call.
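For comparison, here is the same kind of illustrative Python sketch for the capability predicate. Note the explicit index i, which has no counterpart on the ACL side:

    # cap: (process identity, index) -> capability, where a capability
    # pairs an object identity with a set of permitted operations.
    caps = {
        (719, 0): ((22, 36, "file"), {"read", "write"}),
        (719, 1): ((7, 4, "terminal"), {"write"}),
    }

    def obj(capability):      # accessor: capability -> object identity
        return capability[0]

    def ops(capability):      # accessor: capability -> operations
        return capability[1]

    def cap_check(p, i, op):
        # op in ops(caps(p, i)): the process itself names the index i.
        return op in ops(caps[(p, i)])

    print(cap_check(719, 0, "read"))   # True
    print(cap_check(719, 1, "read"))   # False: capability 1 permits only write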
3. Some Differences
This section is incomplete.
Simply comparing the predicates shows that there is a significant difference between the two systems:
ACL: op in acl(object, principal(p))
Capability: op in ops(caps(p,i))
An obvious difference is that the capability model makes no reference to any notion of "principal".
Another obvious difference is that the capability model has a parameter "i". This allows the process to specify which authority it wants to exercise, which is why only the capability model can solve the confused deputy problem.
Access Rights and Persistence
What happens when the computer shuts down and all of the processes disappear?
In an access control list system, this is not a problem, because the login sessions disappear too. The user identity for a process is derived from who starts it, which is in turn derived from the login session. There is no need to record permissions on a per-process basis.
In a capability system, there is a definite problem. Solutions vary. Some systems provide a means to "pickle" a process or associate an initial capability list with each login. EROS makes all processes persistent.
Least Privilege
Capability systems allow a finer grain of protection. Each process has an exactly specified set of access rights. In contrast, access control list systems are coarser: every process executed by "fred" has the same rights. If you could always trust your programs, the coarser protection is fine. In an era where computer viruses are front page news, it is clearly necessary to be able to run some programs with finer restrictions than others.
Revocation
In an access control list, you can remove a user from the list, and that user can no longer gain access to the object. In a capability system, there is no equivalent operation. This is (arguably) a problem. Users come and go on projects, and you'd like to be able to remove them when they should no longer have access to the project data. There are mechanisms to manage this in capability systems, but they are cumbersome.
Rights Transfer
In general, an access control list does not (in theory) allow rights transfer. If "fred" obtains access to an object, he cannot give this access to "mary" by transferring the object descriptor after the object has been opened. I say "in theory" because fred can still proxy for mary.
In a capability system, capabilities can be transferred. If a process running on behalf of fred can speak to a process running on behalf of mary, then capabilities may be transferred between the two processes. This can be useful: it allows you to hand a capability to a particular document to an editor. It can also be dangerous: if the program you are running is malicious, it can leak your authority.
4. The Equivalence Fallacy
There is an old claim that started appearing very early in papers on protection. The claim is:
Capabilities and access control lists are equivalent. They are simply two different ways to look at a Lampson access matrix. Any protection state that can be captured with one can be captured with the other.
People who have heard of capabilities almost universally believe this claim. Unfortunately, the claim is untrue. Worse, it obscures understanding of protection.
By way of debunking it, let me first explain what is meant by this statement. Then let's look at why it is incorrect.
The Lampson Access Matrix
The Lampson access matrix provides a way to look at the protection state of a system. It describes the access rights of the system at some instant in time. Each subject (a subject is always a process) in the system has a row in the table and each object (which may itself be a process) has a column. The entries in the table describe what access rights subject S has on object O:
      O1    O2    O3
S1    r     r,w   x
S2    r     r,w
The idea behind the claim is that if you look at a row of the access matrix, you are looking at a capability, and if you look at a column of the access matrix, you are looking at an access control list entry:
(Picture the matrix above shown twice: in one copy the row for S1 is highlighted, giving the capability view; in the other, the column for an object such as O2 is highlighted, giving the ACL view.)
Unfortunately, this is wrong.
A Problem of Terminology
In the early security literature there was some sloppy use of the term "subject." In some papers the term "subject" was used to mean "process" while in others it was used to mean "principal" (i.e., a user). If we take subject to mean "principal," then the highlighted row is not a capability list; capabilities do not have anything to do with principals. If we take subject to mean "process," then the highlighted column is not an access control list; ACLs do not refer to processes.
I have pointed this out to theorists who work on formal verification, and I have seen some good ones wave their hands and say "That's not a problem -- just expand all the processes, discard the notion of user, and it all works just fine, and the two models both fit in the matrix."
This is true in some mathematical sense. The problem is that after you do this you haven't got access control lists any more. Access control lists are specified in terms of users, not processes. One can argue (and I do) that specifying things in terms of processes is the right thing to do, but once you do this expansion you have lost a level of indirection that was crucial in understanding how access control lists work. There are useful properties you can prove about a system with all processes expanded that are not true if the user identity indirection is retained. It all depends on what operations are permitted, which brings me to the second, more serious problem:
A Problem of Evolution
While the terminology problem is fatal, there is a more subtle and more damaging error in the claim: it is a static view of a dynamic system.
If you freeze a computer system in mid-execution, you can draw an access matrix that describes its current protection state, and you can ask if that state is okay. This is a very useful thing to be able to do. In practice, however, we aren't so much interested in what the instantaneous state of a system is as in how that state can evolve.
At a very high level of abstraction, proofs about security mechanisms all work the same way:
1. First you define what a "safe" state is. That is, you specify what it means for the policy to be enforced.
2. Second you establish an initial, "safe" condition. This is done by setting up the initial access matrix that you want. Since you are in control of how the access matrix is initialized, you should be able to establish just about any condition you would like.
The operative word is "should". In some protection systems, however, there are constraints on what can legally be placed in the matrix. A classical access control list system, for example, requires that every process owned by the same subject must have the same permissions. This means that if P1 and P2 are both owned by S1, their rows must be identical.
Unfortunately, it turns out that this is a fairly damaging restriction. It can make setting up the desired initial conditions extremely difficult (sometimes impossible), and it can make the verification of security policies mathematically undecidable -- which means you can't prove the security policy.
3. Third, you specify what the rules are for how the system operates. What are the steps that the machine will perform? How does each step affect the access matrix (if at all)?
4. Finally, you prove that if you start from the specified initial condition, and you take a sequence of steps, you always end up in a "safe" state. Actually, you have to show that this is true for all possible sequences of steps.
2.c) Discuss no read up and no write down security policies and the tranquility principle in Bell – La Padula security model. [7]

The Bell-LaPadula model focuses on data confidentiality and access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity.
In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a system. The transition from one state to another state is defined by transition functions.
A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
1. The Simple Security Property states that a subject at a given security level may not read an object at a higher security level (no read-up).
2. The *-property (read star-property) states that a subject at a given security level must not write to any object at a lower security level (no write-down). The *-property is also known as the Confinement property.
3. The Discretionary Security Property uses an access matrix to specify the discretionary access control.
The transfer of information from a high-sensitivity paragraph to a lower-sensitivity document may happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted Subjects are not restricted by the *-property. Untrusted subjects are. Trusted Subjects must be shown to be trustworthy with regard to the security policy.
This security model is directed toward access control and is characterized by the phrase: no read up, no write down. Compare the Biba model, the Clark-Wilson model and the Chinese Wall.
With Bell-LaPadula, users can create content only at or above their own security level (Secret researchers can create Secret or Top-Secret files but may not create Public files): no write-down. Conversely, users can view content only at or below their own security level (Secret researchers can view Public or Secret files, but may not view Top-Secret files): no read-up.
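A minimal Python sketch of these two mandatory rules, assuming a simple linear ordering of levels with no compartments (the level numbering is invented for illustration):

    # Simple linear lattice of levels; real BLP labels also carry compartments.
    LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def may_read(subject, obj):
        # Simple Security Property: no read-up.
        return LEVELS[subject] >= LEVELS[obj]

    def may_write(subject, obj):
        # *-property: no write-down (write-up is permitted).
        return LEVELS[subject] <= LEVELS[obj]

    print(may_read("SECRET", "TOP SECRET"))     # False: read-up denied
    print(may_write("SECRET", "CONFIDENTIAL"))  # False: write-down denied
    print(may_write("SECRET", "TOP SECRET"))    # True: write-up allowed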
The Bell-LaPadula model explicitly defined its scope. It did not treat the following extensively:
• Covert channels. Passing information via pre-arranged actions was described briefly.
• Networks of systems. Later modeling work did address this topic.
• Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of boolean policies, as are all other published policies.
Strong * Property
The Strong * Property is an alternative to the *-property in which subjects may write to objects with only a matching security level. Thus, the write up operation permitted in the usual *-property is not present, only a write to same operation. The Strong * Property is usually discussed in the context of multilevel database management systems and is motivated by integrity concerns.[5]
This Strong * Property was anticipated in the Biba model where it was shown that strong integrity in combination with the Bell-La Padula model resulted in reading and writing at a single level.
Tranquility principle
The tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not change while it is being referenced.
There are two forms to the tranquility principle:
1) The "principle of strong tranquility" states that security levels do not change during the normal operation of the system.
2) The "principle of weak tranquility" states that security levels do not change in a way that violates the rules of a given security policy.
Another interpretation of the tranquility principles is that they both apply only to the period of time during which an operation involving an object or subject is occurring. That is, the strong tranquility principle means that an object's security level/label will not change during an operation (such as read or write); the weak tranquility principle means that an object's security level/label may change during an operation, but only in a way that does not violate the security policy.
Limitations
• Restricted to Confidentiality.
• No policies for changing access rights; a complete general downgrade is secure; intended for systems with static security levels.
• Contains covert channels: a low subject can detect the existence of high objects when it is denied access.
• Sometimes, it is not sufficient to hide only the contents of objects. Their existence may have to be hidden, as well.
January-2005 [4]
1.b) With respect to an operating system, what is the primary security benefit of access control lists? [4]
The primary benefit is fine-grained, per-object control: the operating system can grant or deny each user or group specific rights on each individual object. For example, Access Control Lists (ACLs) allow you to control what clients can access on your server. Directives in an ACL file can:
o Screen out certain hosts to either allow or deny access to part of your server;
o Set up password authentication so that only users who supply a valid login and password may access part of the server;
o Delegate access control authority for a part of the server (such as a virtual host's URL space, or individual users' directories) to another ACL file.

January-2006 [11]
2.
c) What is the importance of "no read up" plus "no write down" rule for a multilevel security system? [3]
January-2007 [6]
6.
c) Why are each initiator and each target assigned to one or more security groups in an access control scheme based on security labels? [6]
7.a) Mandatory Access Control (MAC) is an access policy determined by the system, not the owner. Is it true or false? Justify. [3]
Mandatory Access Control
Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used in multilevel systems that process highly sensitive data, such as classified government and military information. A multilevel system is a single computer system that handles multiple classification levels between subjects and objects.
• Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than the requested object.
• Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of MAC-based systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
• Rule-based access controls: This type of control further defines specific conditions for access to a requested object. All MAC-based systems implement a simple form of rule-based access control to determine whether access should be granted or denied by matching:
o An object's sensitivity label
o A subject's sensitivity label
• Lattice-based access controls: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
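A sketch of how such a lattice can be computed, with labels modeled as (level, category set) pairs. The category names and helper functions here are invented for illustration:

    # A label is (level, categories). Label a dominates b iff a's level is
    # at least b's and a's categories are a superset of b's.
    def dominates(a, b):
        return a[0] >= b[0] and a[1] >= b[1]   # ">=" on sets tests superset

    def lub(a, b):   # least upper bound: max level, union of categories
        return (max(a[0], b[0]), a[1] | b[1])

    def glb(a, b):   # greatest lower bound: min level, intersection
        return (min(a[0], b[0]), a[1] & b[1])

    subject = (2, frozenset({"NUCLEAR"}))
    obj = (1, frozenset({"NUCLEAR", "CRYPTO"}))
    print(dominates(subject, obj))  # False: the subject lacks CRYPTO
    print(lub(subject, obj))        # level 2, both categories combined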
Few systems implement MAC. XTS-400 is an example of one that does.
4. Security Policy Design

January-2004 [10]
1.
b) Explain what is challenge response system? [4]
A challenge-response system is a program that replies to an e-mail message from an unknown sender by subjecting the sender to a test (called a CAPTCHA) designed to differentiate humans from automated senders. The system ensures that messages from people can get through and the automated mass mailings of spammers will be rejected. Once a sender has passed the test, the sender is added to the recipient's whitelist of permitted senders that won't have to prove themselves each time they send a message.
Challenge-response systems take a number of different approaches to the task of separating humans from machines. Typically, when a message is received, the system sends a reply that includes a URL linking the user to a Web site. At the Web site, the user is asked to perform some task that, while easy for a human, is beyond the capabilities of an automated spamming program. The system might ask for the answer to a simple question, for example, or require the user to copy distorted letters or numbers displayed in an image.
Companies that provide free e-mail accounts often use a challenge-response system to ensure that their accounts aren't given out to spammers' programs. According to Carnegie Mellon's CAPTCHA Project, computerized programs can create thousands of new e-mail accounts per second, each of which can be used to send out reams of spam.

3. What are the essential components of a corporate security policy? [6]
The Three Components of an Effective Security Policy
While an information security policy is commonly referred to in the singular, an actual policy includes a suite of living documents: the security policy document, a standards document set and a procedures document set. While the policy itself gets the most attention, it often is the shortest document, sometimes taking up only two full pages.
An information security policy makes up for its brevity with the importance of its content. There are usually four key elements to a successful security policy document: to whom and what the policy applies, the need for adherence, a general description, and consequences of nonadherence. These four tenets of the policy provide the foundation for the remaining documents. Once this document is finished, it must be approved by the most senior manager in the organization and then made available to all employees.
The standards document set defines what needs to be done to implement security, including descriptions of required security controls and how those controls apply to the corporate environment. The document set should address a variety of security issues, including, but not limited to, the following: roles and responsibilities of security personnel, protection against malicious code, information and software exchange, user responsibilities, mobile computing, and access control. In addition to the common security concerns, the standards document set outlines compliance issues, government regulations and industry standards.
Much like the security policy document, the information security standards document does not usually need to be changed. Only if new systems, applications or regulations are introduced will the document set need to be modified.
The procedures document set makes up the final component of the corporate information security policy suite. This document should be the biggest of the three components, and it will also be the most flexible. This document set specifically outlines how security controls will be implemented and managed. The procedures should match accompanying standards, making sure that any given standard requires many tasks to be completed to achieve full compliance. This document provides many of the details that can make or break an effective information security policy.
Making the Policy Count—Enforcement
Once the hard work of creating an information security policy and getting it approved is finally done, the enforcement of the policy begins. All the effort put into creating the policy is of little worth unless the policy is followed by the corporation and sufficiently enforced. A compliance program or a policy assessment can be instrumental in assisting an organization’s attempts to enforce an information security policy.
A policy compliance review reveals whether a designed security control is employed and used correctly. Policy compliance reviews differ from traditional vulnerability assessments in many ways. For example, IT and security auditors should handle policy compliance reviews, while security operations personnel should handle vulnerability assessments. Also, policy compliance reviews are used to determine compliance of systems and applications under the new policy, while vulnerability assessments pinpoint specifically the vulnerabilities in systems and applications. Finally, policy compliance reviews use standards and regulations such as ISO 17799 and HIPAA as baselines for measurement, while vulnerability assessments traditionally use security incident and other vulnerability databases for tracking.
Together, policy compliance reviews and vulnerability assessments are critical first-line tactics to proactively defend against escalating security threats. Policy compliance reviews ensure that policy objectives are being met, and vulnerability assessments contribute to overall resiliency by identifying vulnerabilities.
Security compliance tools are available to help corporate systems comply with information security policies and regulatory standards. These compliance tools also aid in discovering, containing and fixing unpatched vulnerabilities. The tools are able to define a policy online in a database and automatically measure compliance across the network. In some cases, policy compliance data can be correlated with other security event data from a wide range of other security sources, including antivirus software, firewalls, intrusion detection systems and vulnerability assessment products.
The Policy Serves as the Foundation
With security threats inundating IT administrators and government regulations forcing corporate compliance, organizations can streamline their security efforts by creating and enforcing strong information security policies. As Internet threats increase and as government regulations become more stringent, the importance of solid security policies will also increase.
July-2004 [7]
1.c) A Data entry firm experiences on an average a loss of 10 files of 1000 bytes each per day due to power failures. The loss probability is 0.9. The cost of keying in a character is Rs. 0.005. At what cost burden the firm should consider putting in a loss prevention mechanism? [4]
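No model answer is given; on one common reading (each lost byte must be re-keyed as one character), the expected daily loss is 10 files × 1000 characters × 0.9 (loss probability) × Rs. 0.005 per character = Rs. 45 per day, or about Rs. 16,425 per year. The firm should therefore consider a loss prevention mechanism only if its cost burden stays below roughly Rs. 45 per day (about Rs. 16,400 per year).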
2.
b) What is the basic purpose of a security model for computer systems? [3]
A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all.

January-2006 [10]
1.d) State four primary functions of CERT. [4]
7. Write short notes on any three:
iii) Risk Assessment (RA) [6]
The Risk Assessment (RA) Policy document establishes the activities that need to be carried out by each Business Unit, Technology Unit, and Corporate Units (departments) within the organization. All departments must utilize this methodology to identify current risks and threats to the business and implement measures to eliminate or reduce those potential risks.
A risk assessment is performed in four distinct steps:
Step 1: Data Compilation and Evaluation

Objective: To verify that the data are appropriate for use and are considered to be representative of current conditions.



• Compile all available data
• Sort by environmental medium
• Evaluate data relative to established criteria
Step 2: Exposure Assessment
Objective: To estimate the type and magnitude of exposures from the chemicals of potential concern that are present at or migrating from a site/facility.
• Characterization of the Exposure Setting
o Characterizing the physical environment
o Identifying potential land-use scenarios
• Identification of Exposure Pathways (components of an exposure pathway)
• Quantification of Exposure
Step 3: Toxicity Assessment
• Hazard identification: determines whether exposure to a chemical can increase the incidence of a particular adverse health effect and determines the likelihood of occurrence in humans.
• Dose-response assessment: presents the relationship between the magnitude of exposure and adverse effects.
Step 4: Risk Characterization
• Review toxicity and exposure assessment output
• Quantify risks
• Combine risks across all pathways
• Assess & present uncertainties
• Consider site-specific human studies, if available
• Summarize & present baseline risk assessment characterization results


July-2006 [10]
1.
g) What are main services provided by Computer security incident response teams? [4]
A Computer Security Incident Response Team (CSIRT) is a service organization that is responsible for receiving, reviewing, and responding to computer security incident reports and activity. Their services are usually performed for a defined constituency that could be a parent entity such as a corporation, governmental, or educational organization; a region or country; a research network; or a paid client.
A CSIRT can be a formalized team or an ad hoc team. A formalized team performs incident response work as its major job function. An ad hoc team is called together during an ongoing computer security incident or to respond to an incident when the need arises.
A CSIRT may perform both reactive and proactive functions to help protect and secure the critical assets of an organization. There is no one standard set of functions or services that a CSIRT provides. Each team chooses its services based on the needs of its constituency. For a discussion of the wide range of services that a CSIRT can choose to provide, please see section 2.3 of the Handbook for CSIRTs.
Whatever services a CSIRT chooses to provide, the goals of a CSIRT must be based on the business goals of the constituent or parent organizations. Protecting critical assets is key to the success of both an organization and its CSIRT. The CSIRT must enable and support the critical business processes and systems of its constituency.
A CSIRT is similar to a fire department. Just as a fire department "puts out a fire" that has been reported, a CSIRT helps organizations contain and recover from computer security breaches and threats. The process by which a CSIRT does this is called incident handling. But just as a fire department performs fire education and safety training as a proactive service, a CSIRT can also provide proactive services. These types of services may include security awareness training, intrusion detection, penetration testing, documentation, or even program development. These proactive services can help an organization not only prevent computer security incidents but also decrease the response time involved when an incident occurs.

5.
b) What are the procedures involved in Quantitative Risk Assessment? How is the Annualized Loss Expectancy (ALE) calculated? [6]
Quantitative Risk Assessment (QRA) is a formalised specialist method for calculating numerical individual, environmental, employee and public risk level values for comparison with regulatory risk criteria.

Satisfactory demonstration of acceptable risk levels is often a requirement for approval of major hazard plant construction plans, including transmission pipelines, offshore platforms and LNG storage and import sites.

Each demonstration must be reviewed periodically to show that risks are controlled to an acceptable level according to applicable legislation and internal company governance requirements.
The Annualized Loss Expectancy (ALE) is the monetary loss that can be expected for an asset due to a risk over a one-year period. It is defined as:
ALE = SLE * ARO
where SLE is the Single Loss Expectancy and ARO is the Annualized Rate of Occurrence.
An important feature of the Annualized Loss Expectancy is that it can be used directly in a cost-benefit analysis. If a threat or risk has an ALE of $5,000, then it may not be worth spending $10,000 per year on a security measure which will eliminate it.
One thing to remember when using the ALE value is that, when the Annualized Rate of Occurrence is of the order of one loss per year, there can be considerable variance in the actual loss. For example, suppose the ARO is 0.5 and the SLE is $10,000. The Annualized Loss Expectancy is then $5,000, a figure we may be comfortable with. Using the Poisson distribution we can calculate the probability of a specific number of losses occurring in a given year:
Number of Losses in Year    Probability    Annual Loss
0                           0.6065         $0
1                           0.3033         $10,000
2                           0.0758         $20,000
≥3                          0.0144         ≥$30,000
We can see from this table that the probability of a loss of $20,000 is 0.0758, and that the probability of losses being $30,000 or more is approximately 0.0144. Depending upon our tolerance to risk and our organization's ability to withstand higher value losses, we may consider that a security measure which costs $10,000 per year to implement is worthwhile, even though it is more than the expected losses due to the threat.
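The figures above can be reproduced with a short Python sketch (the function names are illustrative):

    import math

    def ale(sle, aro):
        # Annualized Loss Expectancy: ALE = SLE * ARO
        return sle * aro

    def p_losses(aro, k):
        # Poisson probability of exactly k loss events in one year
        return math.exp(-aro) * aro**k / math.factorial(k)

    sle, aro = 10_000, 0.5
    print(ale(sle, aro))   # 5000.0
    for k in range(3):
        print(k, round(p_losses(aro, k), 4))   # 0.6065, 0.3033, 0.0758
    print(round(1 - sum(p_losses(aro, k) for k in range(3)), 4))  # 0.0144 for >=3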

7.c) What is network management performance? What are the factors that affect the performance of network? [6]
Network performance management is the discipline of optimizing how networks function, trying to deliver the lowest latency, highest capacity, and maximum reliability despite intermittent failures and limited bandwidth
Factors affecting network performance
Unfortunately, not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.
• Latency: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time.
• Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to intentional discarding of traffic in order to enforce a particular service level.
• Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: First, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.
• Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.
These factors, and others (such as the performance of the network signaling on the end nodes, compression, encryption, concurrency, and so on) all affect the effective performance of a network. In some cases, the network may not work at all; in others, it may be slow or unusable. And because applications run over these networks, application performance suffers. Various intelligent solutions are available to ensure that traffic over the network is effectively managed to optimize performance for all users. See Traffic Shaping

January-2008 [14]
1.
b) What are the different types of messages defined in SNMP? [4]
What are the Basic Components of SNMP?
An SNMP-managed network consists of three key components: managed devices, agents, and network management systems (NMS).
• Managed devices
– Contain an SNMP agent and reside on a managed network.
– Collect and store management information and make it available to NMS by using SNMP.
– Include routers, access servers, switches, bridges, hubs, hosts, or printers.
• Agent—A network-management software module, such as the Cisco IOS software, that resides in a managed device. An agent has local knowledge of management information and makes that information available by using SNMP.
• Network Management Systems (NMS)—Run applications that monitor and control managed devices. NMS provide resources required for network management. In the case study, the NMS applications are:
– UCD-SNMP
– MRTG
– HPOV
– CW2000 RME
Figure 1 illustrates the relationship between the managed devices, the agent, and the NMS.
Figure 1: An SNMP-Managed Network
About Basic SNMP Message Types and Commands
There are three basic SNMP message types:
• Get—NMS-initiated requests used by an NMS to monitor managed devices. The NMS examines different variables that are maintained by managed devices.
• Set—NMS-initiated commands used by an NMS to control managed devices. The NMS changes the values of variables stored within managed devices.
• Trap—Agent-initiated messages sent from a managed device, which reports events to the NMS.
The Cisco IOS generates SNMP traps for many distinct network conditions. Through SNMP traps, the Network Operations Center (NOC) is notified of network events, such as:
– Link up/down changes
– Configuration changes
– Temperature thresholds
– CPU overloads


Note For a list of Cisco-supported SNMP traps, go to http://www.cisco.com/public/mibs/traps/

Figure 2: SNMP Event Interactions Between the NMS and the Agent
What are SNMP MIBs?
A Management Information Base (MIB):
• Presents a collection of information that is organized hierarchically.
• Is accessed by using a network-management protocol, such as SNMP.
• References managed objects and object identifiers.
Managed object—A characteristic of a managed device. Managed objects reference one or more object instances (variables). Two types of managed objects exist:
• Scalar objects—Define a single object instance.
• Tabular objects—Define multiple-related object instances that are grouped together in MIB tables.
Object identifier (or object ID)—Identifies a managed object in the MIB hierarchy. The MIB hierarchy is depicted as a tree with a nameless root. The levels of the tree are assigned by different organizations and vendors.
Figure 3: The MIB Tree and Its Various Hierarchies
As shown in Figure 3, top-level MIB object IDs belong to different standards organizations, while low-level object IDs are allocated by associated organizations. Vendors define private branches that include managed objects for their products. Nonstandard MIBs are typically in the experimental branch.
A managed object has these unique identities:
• The object name—For example, iso.identified-organization.dod.internet.private.enterprise.cisco.temporary variables.AppleTalk.atInput
or
• The equivalent object descriptor—For example, 1.3.6.1.4.1.9.3.3.1.
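As an illustration only (not a real MIB library), the correspondence between the two forms can be modeled as a list of (arc name, sub-identifier) pairs, here using the conventional short arc names:

    # Each arc of the object name maps to one sub-identifier of the
    # object descriptor; the pairs below follow the example above.
    arcs = [("iso", 1), ("org", 3), ("dod", 6), ("internet", 1),
            ("private", 4), ("enterprises", 1), ("cisco", 9),
            ("temporary variables", 3), ("AppleTalk", 3), ("atInput", 1)]

    object_name = ".".join(name for name, _ in arcs)
    object_descriptor = ".".join(str(num) for _, num in arcs)
    print(object_name)         # iso.org.dod.internet. ... .atInput
    print(object_descriptor)   # 1.3.6.1.4.1.9.3.3.1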
SNMP must account for and adjust to incompatibilities between managed devices. Different computers use different data-representation techniques, which can compromise the ability of SNMP to exchange information between managed devices.
What is SNMPv1?
SNMPv1 is the initial implementation of the SNMP protocol and is described in RFC 1157 (http://www.ietf.org/rfc/rfc1157).
SNMPv1:
• Functions within the specifications of the Structure of Management Information (SMI).
• Operates over protocols such as User Datagram Protocol (UDP), Internet Protocol (IP), OSI Connectionless Network Service (CLNS), AppleTalk Datagram-Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX).
• Is the de facto network-management protocol in the Internet community.
The SMI defines the rules for describing management information by using Abstract Syntax Notation One (ASN.1). The SNMPv1 SMI is defined in RFC 1155 (http://www.ietf.org/rfc/rfc1155). The SMI makes three specifications:
• ASN.1 data types
• SMI-specific data types
• SNMP MIB tables
SNMPv1 and ASN.1 Data Types
The SNMPv1 SMI specifies that all managed objects must have a subset of associated ASN.1 data types. Three ASN.1 data types are required:
• Name—Serves as the object identifier (object ID).
• Syntax—Defines the data type of the object (for example, integer or string). The SMI uses a subset of the ASN.1 syntax definitions.
• Encoding—Describes how information associated with a managed object is formatted as a series of data items for transmission over the network.
SNMPv1 and SMI-Specific Data Types
The SNMPv1 SMI specifies the use of many SMI-specific data types, which are divided into two categories:
• Simple data types—Including these three types:
– Integers—A signed integer in the range of -2,147,483,648 to 2,147,483,647.
– Octet strings—Ordered sequences of zero to 65,535 octets.
– Object IDs— Come from the set of all object identifiers allocated according to the rules specified in ASN.1.
• Application-wide data types—Including these seven types:
– Network addresses—Represent addresses from a protocol family. SNMPv1 supports only 32-bit IP addresses.
– Counters—Nonnegative integers that increase until they reach a maximum value; then, the integers return to zero. In SNMPv1, a 32-bit counter size is specified.
– Gauges—Nonnegative integers that can increase or decrease but retain the maximum value reached.
– Time ticks—Represent hundredths of a second since some event.
– Opaques—An arbitrary encoding that is used to pass arbitrary information strings that do not conform to the strict data typing used by the SMI.
– Integers—Signed integer-valued information. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
– Unsigned integers—Unsigned integer-valued information that is useful when values are always nonnegative. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
The SNMPv1 SMI defines structured tables that are used to group the instances of a tabular object (an object that contains multiple variables). Tables contain zero or more rows that are indexed to allow SNMP to retrieve or alter an entire row with a single Get, GetNext, or Set command.
SNMPv1 Protocol Operations
SNMP is a simple request-response protocol. The NMS issues a request, and managed devices return responses. This behavior is implemented by using one of four protocol operations:
• Get—Used by the NMS to retrieve the value of one or more object instances from an agent. If the agent responding to the Get operation cannot provide values for all the object instances in a list, the agent does not provide any values.
• GetNext—Used by the NMS to retrieve the value of the next object instance in a table or list within an agent.
• Set—Used by the NMS to set the values of object instances within an agent.
• Trap—Used by agents to asynchronously inform the NMS of a significant event.
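A toy in-memory sketch of how the request-response operations behave, with plain Python standing in for a real agent (actual SNMP exchanges BER-encoded PDUs over UDP, and Trap, being agent-initiated, has no request/response analogue here); the OIDs and values are illustrative:

    # mib: OID string -> value, standing in for an agent's managed objects.
    mib = {
        "1.3.6.1.2.1.1.1.0": "router model X",   # illustrative sysDescr
        "1.3.6.1.2.1.1.5.0": "core-rtr-1",       # illustrative sysName
    }

    def oid_key(oid):
        # Compare OIDs numerically, arc by arc, not as plain strings.
        return tuple(int(part) for part in oid.split("."))

    def get(oid):
        return mib.get(oid)   # a real agent returns an error, not None

    def get_next(oid):
        # Return the next object instance in MIB order, as GetNext does.
        later = sorted((k for k in mib if oid_key(k) > oid_key(oid)), key=oid_key)
        return (later[0], mib[later[0]]) if later else None

    def set_value(oid, value):
        mib[oid] = value      # Set: the NMS changes a variable in the agent

    print(get("1.3.6.1.2.1.1.5.0"))        # core-rtr-1
    print(get_next("1.3.6.1.2.1.1.1.0"))   # ('1.3.6.1.2.1.1.5.0', 'core-rtr-1')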
What is SNMPv2?
SNMPv2 is an improved version of SNMPv1. Originally, SNMPv2 was published as a set of proposed Internet standards in 1993; currently, it is a Draft Standard. As with SNMPv1, SNMPv2 functions within the specifications of the SMI. SNMPv2 offers many improvements to SNMPv1, including additional protocol operations.
SNMPv2 and SMI
The SMI defines the rules for describing management information by using ASN.1.
RFC 1902 (http://www.ietf.org/rfc/rfc1902) describes the SNMPv2 SMI and enhances the SNMPv1 SMI-specific data types by including:
• Bit strings—Comprise zero or more named bits that specify a value.
• Network addresses—Represent an address from a protocol family. SNMPv1 supports 32-bit IP addresses, but SNMPv2 can support other types of addresses too.
• Counters—Non-negative integers that increase until they reach a maximum value; then, the integers return to zero. In SNMPv1, a 32-bit counter size is specified. In SNMPv2, 32-bit and 64-bit counters are defined.
SMI Information Modules
The SNMPv2 SMI specifies information modules, which include a group of related definitions. Three types of SMI information modules exist:
• MIB modules—Contain definitions of interrelated managed objects.
• Compliance statements—Provide a systematic way to describe a group of managed objects that must conform to a standard.
• Capability statements—Used to indicate the precise level of support that an agent claims with respect to a MIB group. An NMS can adjust its behavior towards agents according to the capability statements associated with each agent.
SNMPv2 Protocol Operations
The Get, GetNext, and Set operations used in SNMPv1 are exactly the same as those used in SNMPv2. SNMPv2, however, adds and enhances protocol operations. The SNMPv2 trap operation, for example, serves the same function as the one used in SNMPv1. However, a different message format is used.
SNMPv2 also defines two new protocol operations:
• GetBulk—Used by the NMS to efficiently retrieve large blocks of data, such as multiple rows in a table. GetBulk fills a response message with as much of the requested data as will fit; if the agent cannot provide values for all the variables in the list, it provides partial results.
• Inform—Allows one NMS to send trap information to another NMS and to receive a response.
About SNMP Management
SNMP is a distributed-management protocol. A system can operate exclusively as an NMS or an agent, or a system can perform the functions of both.
When a system operates as both an NMS and an agent, another NMS can require the system to:
• Query managed devices and provide a summary of the information learned.
• Report locally stored management information.
About SNMP Security
SNMP lacks authentication capabilities, which results in a variety of security threats:
• Masquerading—An unauthorized entity attempting to perform management operations by assuming the identity of an authorized management entity.
• Modification of information—An unauthorized entity attempting to alter a message generated by an authorized entity, so the message results in unauthorized accounting management or configuration management operations.
• Message sequence and timing modifications—Occurs when an unauthorized entity reorders, delays, or copies and later replays a message generated by an authorized entity.
• Disclosure—Results when an unauthorized entity extracts values stored in managed objects. The entity can also learn of notifiable events by monitoring exchanges between managers and agents.

3.
b) What is a SNMP? Explain the SNMP model of a managed network with block diagram showing all the components. [10]
SNMP is used in network management systems to monitor network-attached devices for conditions that warrant administrative attention. It consists of a set of standards for network management, including an Application Layer protocol, a database schema, and a set of data objects.[1]
SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.
SNMP basic components
An SNMP-managed network consists of three key components:
1. Managed devices
2. Agents
3. Network-management systems (NMSs)
A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be any type of device including, but not limited to, routers, access servers, switches, bridges, hubs, IP telephones, computer hosts, and printers.
An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.
A network management system (NMS) executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network
The SNMP concepts
The SNMP model defines two entities, which work in client-server mode.
The server is called an agent and is located on the device to be supervised. The client is the SNMP manager, in charge of data collection and display. (SNMP version 3 replaces the manager/agent terminology with the generic notion of an SNMP entity.) The agent listens for requests from the manager on UDP port 161, while the manager listens for "trap" alarms from the agent on UDP port 162.

The SNMP manager
The SNMP manager should be installed on a powerful system connected to the enterprise network. Another common name for it is the management station.
Its job is to acquire, via SNMP requests, information about the devices connected to the network. The gathered information is then processed and displayed in tables, graphs, gauges, and histograms for easier interpretation by a human being.
The management station includes the following components:
• The graphical user interface: used to display the collected data in a friendly manner.
• The database: used by the manager to store collected data.
• The transport protocol: used to communicate between manager and agent.
• The SNMP engine: the kernel of the application; it manages all the tasks like an orchestra conductor.
• Agent management profiles: a set of rules that defines how to access the agents; these profiles also help to build the topology map.
The SNMP agent
The agent is a mix of software and hardware, or software only, and is located in the device. Most network devices are equipped with an agent by default, and other systems running a standard operating system can behave like an agent by running a simple process. Windows platforms, Novell servers, and Unix and Linux systems all have their own agents, and most hubs and MAUs are manageable as well.
Agents are composed of:
• A transport protocol stack: responsible for sending and receiving SNMP packets.
• An SNMP engine: processes requests and formats data.
• Management profiles: the rules that control access to MIB variables and determine which requests are authorized.
Objects and MIBs
OID
Each agent has a set of associated objects that can be interrogated by the management station. An object is the abstraction of a physical or logical device component. A table object is a set of objects grouped in a table.
Examples of physical elements in a device: power supply, fans, boards, probes.
Examples of logical elements: processes, buffers, files.
Objects are defined in MIB (Management Information Base) files. Agents compatible with SNMP version 1 should at least be able to support the objects defined in RFC 1155 and RFC 1213, which are in fact standard MIB files.
To simplify reading the objects contained in an agent, a standard hierarchical structure is used.

NETWORK MANAGEMENT AND INFORMATION SYSTEMS-CHAPTER 2

2. Identification & Authentication

January-2004 [10]
2. a) What are two common techniques used to protect a password file? [6]
Form of stored passwords
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. A common approach stores only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password-handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access.
Rate at which an attacker can try out guessed passwords
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time out of several seconds after a small number (e.g., three) of failed password entry attempts. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords, if they have been well chosen and are not easily guessed
Ensure that good passwords are selected so that they cannot easily be cracked, or use a technology in which passwords are not located in the password file.
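A minimal sketch of the hashed-storage approach using Python's standard library (PBKDF2 is one common choice; the iteration count and salt size here are illustrative, not a prescription):

    import hashlib, hmac, os

    ITERATIONS = 200_000   # illustrative work factor

    def hash_password(password, salt=None):
        # Store a random salt plus a salted, iterated hash, never the plaintext.
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password, salt, stored):
        # Re-hash the attempt and compare in constant time.
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(attempt, stored)

    salt, stored = hash_password("s3cret!")
    print(verify("s3cret!", salt, stored))  # True
    print(verify("guess", salt, stored))    # False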

c) Why is authentication an important requirement for network security? [4]
Authentication is any process by which you verify that someone is who they claim to be. This usually involves a username and a password, but can include any other method of demonstrating identity, such as a smart card, retina scan, voice recognition, or fingerprints. Authentication is equivalent to showing your driver's license at the ticket counter at the airport.

July-2004 [21]
1. f) A password cracker knows for certain that a genuine user uses a password that is four characters long drawn from a set of 100 characters. He decides to crack the password by brute force method. What is the maximum number of combinations he needs to test? How long would it take (in years) for him to crack the password if it takes 100 msec to test each password? [4]
A brute-force attack is a method of breaking a cipher (that is, of decrypting a specific encrypted text) by trying every possible key. The feasibility of a brute-force attack depends on the key length of the cipher and on the amount of computational power available to the attacker. Cain's Brute-Force Password Cracker tests all the possible combinations of characters in a pre-defined or custom character set against the encrypted passwords loaded in the brute-force dialog.

The key space of all possible combinations of passwords to try is calculated using the following formula:

KS = L^m + L^(m+1) + L^(m+2) + ... + L^M

where
L = character set length
m = min length of the key
M = max length of the key

For example, when you want to crack one half of a LanManager (LM) password using the character set "ABCDEFGHIJKLMNOPQRSTUVWXYZ" of 26 letters, the brute-force cracker has to try KS = 26^1 + 26^2 + 26^3 + ... + 26^7 = 8353082582 different keys. If you want to crack the same password using the character set "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*()-_+=~`[]{}\:;"'<>,.?/", the number of keys to try rises to 6823331935124.

Exhaustive key search cracking can take a very long time to complete; however, if the character set is the right one, the password will be cracked. It is only a matter of time.
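Applying this to the question itself: the maximum number of combinations is 100^4 = 10^8, and at 100 msec per test an exhaustive search takes at most 10^7 seconds, about 116 days, i.e. roughly 0.32 years. A quick check:

    combinations = 100 ** 4            # fixed length 4, 100-character set
    seconds = combinations * 0.100     # 100 msec per attempt
    print(combinations)                # 100000000
    print(seconds / 86400)             # ~115.7 days
    print(seconds / (365.25 * 86400))  # ~0.317 years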

2. a) List any four biometric methods other than voice print used for user authentication. Discuss the user registration and authentication procedures in the case of voice print biometric key. [8]

Voice biometrics, which measure the user's voice, are an excellent option for application security: they require only a microphone, a robust piece of equipment as close as the nearest telephone. A voiceprint is a set of measurable characteristics of a human voice that uniquely identifies an individual. These characteristics, which are based on the physical configuration of a speaker's mouth and throat, can be expressed as a mathematical formula. The term applies to a vocal sample recorded for that purpose, the derived mathematical formula, and its graphical representation. Voiceprints are used in voice ID systems for user authentication.
Voice biometrics provide three different services: identification, verification, and classification. Speaker verification authenticates a claim of identity, similar to matching a person's face to the photo on their badge. Speaker identification selects the identity of a speaker out of a group of possible candidates, similar to finding a person's face in a group photograph. Speaker classification determines age, gender, and other characteristics. Here, I'll focus on speaker verification resources ("verifiers").
Older verifiers used simple voiceprints, which are essentially verbal passwords. During verification, the resource matches a user's current utterance against a stored voiceprint.
Modern verifiers create a model of a user's voice and can match against any phrase the user utters. This is a terrific advantage. First, ordinary dialogue can be used for verification, so an explicit verification dialogue may be unnecessary. Second, applications can challenge users to speak random phrases, which makes attacks with stolen speech extremely difficult.
Architecture, Resources
The prototype I present uses a telephony server to connect to the telephone network, a speech-technology server, and an application server to execute my code and control the other two servers; see Figure 1.
For the telephony server, speech-technology resource server, and application server, I use BeVocal's free developer hosting (http://cafe.bevocal.com/). BeVocal hosts VoiceXML-based applications. VoiceXML is an open specification from the W3C's "voice browser" working group (http://www.w3.org/Voice/). XML-based VoiceXML lets you write scripts with dialogues that use spoken or DTMF input, and text-to-speech or prerecorded audio for output. My scripts reside on the Internet and are fetched by the VoiceXML server via HTTP. Since the VoiceXML specification does not define a voice biometrics API, I used BeVocal's extensions to VoiceXML.
Another company that offers voice biometrics hosting is Voxeo (http://techpreview.voxeo.com/); Voxeo uses a different API. Voxeo lets you send tokens through HTTP to initiate calls from the VoiceXML server to users, which is convenient for web-based applications—not to mention more secure, as the application can easily restrict the calls to predefined telephone numbers. Both BeVocal and Voxeo offer free technical support—and they need to because documentation is often sparse or incorrect. Loggers track script execution and report errors, but you'll need your sleuthing skills to uncover the actual errors.
Enrollment
Before users can use the verifier, the verifier must obtain a model of the user's voice—users must enroll. During enrollment, users speak several phrases, usually similar to those used during verification.
Listing One highlights the enrollment application (the complete source code and related files are available electronically; see "Resource Center," page 5).
Users' voice models are stored in a database at the VoiceXML server. Each developer has a separate database, and the developer assigns keys to each user. Generally, users speak or enter ID numbers, which act as the keys.
VoiceXML is based around forms, each with several fields to be filled. After collecting user ID numbers in a previous form, Listing One's form starts collecting speech for enrollment. The <var> tags create JavaScript variables, which are initialized via JavaScript functions (not shown) with numeric and text versions of a four-digit random number. BeVocal's extension tag activates both a verifier and a speech-recognition resource; its keyExpr attribute gives the verifier the key under which to store the voice model, in our case the ID number. The name attribute defines a field that accepts the results of recognition, and type specifies the input to expect: a four-digit number. A <prompt> tag sends text to the text-to-speech resource, which plays it to the user. A <break> tag introduces a pause between the introductory prompt and the challenge phrase. The <say-as> tag is a directive to the text-to-speech system: the string "1234" should be pronounced "one two three four" and not "one thousand two hundred thirty-four."
If users do not speak, the <noinput> tag is activated; if the user's utterance does not match the grammar (is not a four-digit number), the <nomatch> tag is activated. In either case, a counter decrements; when the counter drops to zero, I emulate transfer to a human agent. This counter defends the application against malicious users who tie up the server, and helps users who are having trouble.
When a user utterance matches the grammar, the <filled> tag is activated and compares the utterance with the challenge. This ensures that the verifier hasn't inadvertently collected noise and mistaken it for a valid utterance, and that someone is not trying to spoof the system with prerecorded utterances. If the utterance matches the challenge, the application goes to the next step via a <goto> tag; if they do not match, the recognition result is reset via a <clear> tag, which causes the field to execute again. In the remainder of the enrollment application, users repeat a different four-digit number and the current date.
Verification
To verify a user's identity, a user first claims an identity, in our case by providing an ID number. Listing Two is the application after a user has made a claim.
The BeVocal API does not let you check whether the database of voice models actually contains the needed model. Instead, if the database key is incorrect, the server interrupts itself in mid-prompt when the mistake is discovered, which annoys me and users. Fortunately, a little judicious hacking solves the problem. Listing Two(a) starts with the tag that activates the verifier and speech-recognition resource and defines both a field to receive the results and the type of input expected. The identity claim is passed to the verifier via the keyExpr attribute. A property setting limits the total time to perform recognition to 1 ms, and the prompt is only 1 ms long. The first branch is processed if the key is in the database. The second is activated if the key is invalid; the user is sent back to the form that collects the ID number. As in enrollment, too many errors will send a user to an operator.
With a valid key, the system moves on to Listing Two(b). Variables are initialized with a random challenge phrase, and an announcement plays to users. A tag starts a verifier and speech-recognition resource, and users are asked to speak the four-digit challenge number. If users are silent (<noinput>) or say something other than a number (<nomatch>), the application reprompts them.
If users speak a number, the <filled> tag is activated and checks the number of attempts. If the user is still under the limit, the first branch compares the utterance to the challenge number. If they are not the same, the results are reset and users try again. Otherwise, conditional tags examine the decision of the verifier, which returns one of three confidence levels. If users are accepted, the transaction is approved; if users are decisively rejected, they are sent to operators for further assistance. If neither is true, that is, if the result is "unsure," users are sent to further dialogue (not shown) with a second or third round of challenge phrases. Users who cannot be verified are sent to operators.
There does not appear to be any one method of biometric data gathering and reading that does the "best" job of ensuring secure authentication. Each of the different methods of biometric identification has something to recommend it. Some are less invasive, some can be done without the knowledge of the subject, and some are very difficult to fake.
Face recognition: Of the various biometric identification methods, face recognition is one of the most flexible, working even when the subject is unaware of being scanned. It also shows promise as a way to search through masses of people who spent only seconds in front of a "scanner" - that is, an ordinary digital camera. Face recognition systems work by systematically analyzing specific features that are common to everyone's face - the distance between the eyes, width of the nose, position of cheekbones, jaw line, chin and so forth. These numerical quantities are then combined in a single code that uniquely identifies each person.
Fingerprint identification: Fingerprints remain constant throughout life. In over 140 years of fingerprint comparison worldwide, no two fingerprints have ever been found to be alike, not even those of identical twins. Good fingerprint scanners have been installed in PDAs like the iPaq Pocket PC, so scanner technology is also easy. It might not work in industrial applications, since it requires clean hands. Fingerprint identification involves comparing the pattern of ridges and furrows on the fingertips, as well as the minutiae points (ridge characteristics that occur when a ridge splits into two, or ends), of a specimen print with a database of prints on file.
Hand geometry biometrics: Hand geometry readers work in harsh environments, do not require clean conditions, and form a very small dataset. It is not regarded as an intrusive kind of test. It is often the authentication method of choice in industrial environments.
Retina scan: There is no known way to replicate a retina. As far as anyone knows, the pattern of the blood vessels at the back of the eye is unique and stays the same for a lifetime. However, it requires about 15 seconds of careful concentration to take a good scan. Retina scanning remains a standard in military and government installations.
Iris scan: Like a retina scan, an iris scan also provides unique biometric data that is very difficult to duplicate and remains the same for a lifetime. The scan is similarly difficult to make (it may be difficult for children or the infirm). However, there are ways of encoding the iris scan biometric data in a way that it can be carried around securely in a "barcode" format. (See the SF in the News article Biometric Identification Finally Gets Started for some detailed information about how to perform an iris scan.)
Signature: A signature is another example of biometric data that is easy to gather and is not physically intrusive. Digitized signatures are sometimes used, but usually have insufficient resolution to ensure authentication.
Voice analysis: Like face recognition, voice biometrics provide a way to authenticate identity without the subject's knowledge. A voice is easier to fake (using a tape recording), but it is not possible to fool an analyst by imitating another person's voice.
Voice authentication, also known as "speaker verification," is defined as the automated verification of a person's claimed identity, based on unique characteristics of their voice. A simple microphone is enough to record the voice; most algorithms then analyse the voice spectrum. Do not confuse speaker/voice recognition with speech recognition: speech recognition is the recognition of what you are saying, not of who you are.
In biometrics, the instance of a security system failing to verify or identify an authorized person. Also referred to as a type I error, a false rejection does not necessarily indicate a flaw in the biometric system; for example, in a fingerprint-based system, an incorrectly aligned finger on the scanner or dirt on the scanner can result in the scanner misreading the fingerprint, causing a false rejection of the authorized user.
The false rejection rate, or FRR, is the measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user. A system’s FRR typically is stated as the ratio of the number of false rejections divided by the number of identification attempts.
Crossover Error Rate (CER) is a comparison metric for different biometric devices and technologies. It is the error rate at which the false acceptance rate (FAR) equals the false rejection rate (FRR). As an identification device becomes more sensitive or accurate, its FAR decreases while its FRR increases. The CER is the point at which these two rates are equal, or cross over.
In biometrics, the instance of a security system incorrectly verifying or identifying an unauthorized person. Also referred to as a type II error, a false acceptance typically is considered the most serious of biometric security errors, as it gives unauthorized users access to systems that expressly are trying to keep them out.
The false acceptance rate, or FAR, is the measure of the likelihood that the biometric security system will incorrectly accept an access attempt by an unauthorized user. A system’s FAR typically is stated as the ratio of the number of false acceptances divided by the number of identification attempts.
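These three metrics can be made concrete with a small sketch. Given lists of match scores for genuine users and impostors (the score lists and threshold sweep below are invented purely for illustration), FRR and FAR can be computed at each threshold and the crossover located:

# Hypothetical match scores (higher = better match); illustrative only.
genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.60, 0.95, 0.82]
impostor_scores = [0.30, 0.55, 0.42, 0.71, 0.25, 0.48, 0.66]

def frr(threshold):
    """Fraction of genuine attempts incorrectly rejected (type I errors)."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(threshold):
    """Fraction of impostor attempts incorrectly accepted (type II errors)."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Sweep thresholds; the crossover error rate (CER) is where FAR ~= FRR.
best = min((abs(far(t) - frr(t)), t) for t in [i / 100 for i in range(101)])
print("approximate crossover threshold:", best[1])

Raising the threshold lowers FAR but raises FRR, which is exactly the trade-off the CER summarizes.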
4. c) What are the three phases of authentication in Kerberos v4? Discuss each phase briefly bringing out clearly how certain security threats are overcome in each phase. [9]
Kerberos Authentication Process
There are several phases to Kerberos authentication. In the first phase, the client obtains credentials to be used to request access to kerberized services. In the second phase, the client requests authentication for a specific service. In the final phase, the client presents those credentials to the service. Figure 2-8 and Figure 2-9 illustrate this process.
Figure 2-8 shows the first phase, in which the client, labeled Alice in the figure, requests credentials from the Kerberos KDC.
Figure 2-8 Requesting credentials from the KDC
The steps are as follows:
1. Alice sends a request to the KDC for credentials. The KDC prompts Alice for a user name and password (or other authentication information), checks the authentication information against the data in the directory server, and (assuming the authentication is valid) gets Alice’s private key from the directory server.
2. The KDC creates an encryption key (called a session key) for use by Alice the next time she wants to request service from a kerberized server and encrypts the key with Alice’s private key. It also creates an identification credential called a ticket-granting ticket (TGT), which contains a copy of the session key encrypted with the KDC’s private key (plus other information). The KDC sends both credentials to Alice. Alice decrypts the session key and stores it for later. She can’t decrypt the TGT or modify it, but saves it for later use as well. Both the session key and the TGT include timestamps and expiration times to limit the chances of their being intercepted and used by unauthorized persons.
In the second phase, Alice uses the TGT to request identification credentials from the KDC in order to use a kerberized service, labeled Bob in the figure. Because Alice has a TGT, the KDC does not have to reauthenticate her, so Alice is not asked again for her password. In the third phase, Alice sends the credentials to Bob, and Bob sends authentication information to Alice. The second and third phases are illustrated in Figure 2-9.
Figure 2-9 Authenticating the client and server with a Kerberos ticket
The steps are as follows:
1. Alice sends to the KDC a request to open a session with Bob, together with the TGT that the KDC issued earlier. Because the TGT is encrypted with the KDC’s private key, it cannot have been altered, and the KDC accepts it as proof that Alice has been authenticated.
2. The KDC decrypts the TGT and extracts the session key it issued earlier to Alice. (Recall that when the KDC sent the session key to Alice earlier, it was encrypted with Alice’s private key, so only the KDC and Alice can know this session key.) The KDC then generates a random value, encrypts it with the session key, and sends it to Alice. It also creates a ticket for Alice to send to Bob. This ticket contains a new session key, the same random value that was sent to Alice, and an indication that the request for a session came from Alice. This key is encrypted with Bob’s private key, so Alice (or an intruder) cannot read or modify it. The KDC sends the ticket to Alice.
3. Alice sends the ticket to Bob. Bob decrypts it with his private key. Because only the KDC and Bob know this key, Bob knows the ticket was issued by the KDC. Bob extracts the random value and the session key, and encrypts the random value with the session key.
4. Bob sends the encrypted value to Alice. Because Alice knows that only she and Bob have this session key, she knows that the credential must have come from Bob. She checks the value and compares it with the one she received earlier from the KDC. If they match, she knows the message was not interfered with, and she accepts that Bob has been authenticated by the KDC.
Note that this procedure does not involve sending either Alice’s or Bob’s private key over the network. Both Alice and Bob are authenticated to each other, so Bob knows that Alice is a valid user and Alice knows that Bob is the server with which she intended to do business. All credentials are further protected with timestamps and expiration times. Kerberos has other security features as well; for details, see the MIT Kerberos website at http://web.mit.edu/kerberos/.
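To make the first phase concrete, here is a heavily simplified toy model in Python, using the Fernet symmetric-encryption recipe from the cryptography package as a stand-in for Kerberos's encryption. The key names and message layout are invented for illustration; real Kerberos messages carry timestamps, lifetimes, principal names, and much more.

from cryptography.fernet import Fernet

# Long-term secret keys, each known only to the KDC and the named party.
alice_key = Fernet.generate_key()
kdc_key   = Fernet.generate_key()

# --- KDC side: phase one ---
session_key = Fernet.generate_key()
# Session key for Alice, sealed with Alice's long-term key.
for_alice = Fernet(alice_key).encrypt(session_key)
# Ticket-granting ticket: session key sealed with the KDC's own key,
# so Alice can store it but cannot read or alter it.
tgt = Fernet(kdc_key).encrypt(b"alice|" + session_key)

# --- Alice's side ---
session_key_alice = Fernet(alice_key).decrypt(for_alice)

# --- Later: Alice presents the TGT; only the KDC can open it ---
name, _, recovered = Fernet(kdc_key).decrypt(tgt).partition(b"|")
assert name == b"alice" and recovered == session_key_alice

The point of the TGT is visible in the last step: the KDC recovers the session key from a blob that Alice carried but could never inspect or forge.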
Kerberos and Authorization
Kerberos is an authentication protocol, not an authorization protocol. That is, it verifies the identities of both the client and the server, but it does not include any information about whether the client has a right to use the services provided by the server. In terms of the preceding discussion, once Bob is satisfied that the request for services really came from Alice, it is up to Bob to determine whether to grant Alice access to those services. The ticket that Bob receives from Alice contains enough information about Alice to enable Bob to make that determination.
Starting with version 5, Kerberos tickets provide a mechanism for the tamperproof transmission of authorization information. When the client requests a ticket, it includes information about itself in the request and can request that the KDC include additional authorization in the ticket. The KDC inserts this information into the authorization data field of the ticket and forwards it to the server. Kerberos does not define how this authorization information should be encoded; it provides only a secure mechanism for its transmission. It is up to the client and server to implement the authorization protocol.
Single Signon
Mac OS X uses Kerberos for single signon authentication, which relieves users from entering a name and password separately for every kerberized service. With single signon, after a user enters a name and password in the login window, the user does not have to enter a name and password for Apple file service, mail service, or other services that use Kerberos authentication. In other words, Kerberos authenticates the user once, and thereafter uses tickets to identify the user (see “Authentication, Identification, and Authorization”).
To take advantage of the single signon feature, services must be configured for Kerberos authentication and users and services must use the same Kerberos KDC. For Mac OS X Server v10.3 and later, user accounts in an LDAP directory that have a password type of Open Directory use the server’s built-in KDC. These user accounts are automatically configured for Kerberos and single signon. The server’s kerberized services also use the server’s built-in KDC and are automatically configured for single signon. See Mac OS X Server Open Directory Administration (at http://www.apple.com/server/documentation/) for details.
Large Networks
In “Kerberos Authentication Process,” the Kerberos Key Distribution Center (KDC) is treated as a single entity. However, a KDC consists of two separate software processes: the ticket-granting server and the authentication server. The authentication server verifies a user’s identity by prompting the user for a name and password and asking the directory server for the user’s password. The authentication server then looks up the user’s private key, generates a session key, and creates the ticket-granting ticket (TGT), as shown in Figure 2-8. Thereafter, the user sends the TGT to the ticket-granting server whenever the services of a kerberized server are required, and the ticket-granting server issues the ticket, as shown in Figure 2-9.
Many networks are too large to efficiently store all the information about users and computers in a single directory server. Instead, a distributed model is used, where there are a number of directory servers, each serving a subset of the network. In Kerberos parlance, this subset is referred to as a realm. Each realm has its own ticket-granting server and authentication server. If a user needs a ticket for a service in a different realm, the authentication server issues a TGT and the user sends the TGT to the authentication server, as before. The authentication server then issues a ticket, not for the desired service but for the remote ticket-granting server for the realm that the service is in. The user then sends the ticket to the remote ticket-granting server to get the ticket for the actual service.
In fact, in a large network, the user might have to contact the remote ticket-granting server in a sequence of realms before finally getting the ticket for the desired service. When a ticket for the application service is finally issued, it contains an enumeration of all the realms consulted in the process of requesting the ticket. An application server that applies strict authorization rules is permitted to reject authentication that passes through realms that it does not trust.
Although limited cross-realm authentication was possible in Kerberos v4, the full implementation of this feature is new in Kerberos v5.
Public Keys
In principle, public key authentication works in much the same way as private key authentication, with one major difference: public keys do not have to be kept secret, so there is no need to encrypt them or send them over secure channels. The public key can be provided by a server, in a certificate, or through some other method. Figure 2-10 illustrates public key authentication using an authentication server.
Figure 2-10 Public key authentication
The steps are as follows:
1. Alice sends Bob a request to talk.
2. Bob generates a random value and sends it to Alice as a challenge.
3. Bob requests Alice’s public key from the authentication server.
4. The authentication server sends the unencrypted public key to Bob.
5. Alice encrypts the random value with her private key and sends it to Bob.
6. Bob decrypts the value with Alice’s public key.
7. Bob compares the decrypted value with the original value to verify that they are identical. Alice has now authenticated herself to Bob.
Bob can authenticate himself to Alice in exactly the same way.
Notice that there is no need for the authentication server to store any sensitive material: the public keys do not have to be stored securely, and the authentication server does not need to hold passwords because it never has to verify one. However, it is necessary to ensure that no one alters the public keys stored in the authentication server. Otherwise, Eve, for example, could substitute her public key for Alice’s and then impersonate Alice. Therefore, actual implementations of server-based public key authentication systems, such as the one used by Novell’s NDS (Novell Directory Services), include additional security features.
Note, however, that it is not necessary to have an authentication server in order to use public key authentication. Digital certificates can take the place of a central distributor of public keys.
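The challenge-response steps above can be sketched directly with the Python cryptography package. This is a minimal illustration of steps 2 through 7; the key size and padding are reasonable defaults of my own choosing, not mandated by any particular system, and the "encrypt with the private key" step is implemented, as in practice, as a digital signature.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Alice's key pair; the public half would live on the authentication server.
alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

# Step 2: Bob generates a random challenge.
challenge = os.urandom(32)

# Step 5: Alice signs the challenge with her private key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = alice_private.sign(challenge, pss, hashes.SHA256())

# Steps 6-7: Bob verifies the response with Alice's public key;
# verify() raises InvalidSignature if the response is wrong.
alice_public.verify(signature, challenge, pss, hashes.SHA256())
print("Alice authenticated")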
Certificates
The problem of ensuring that a public key actually belongs to the entity you wish to authenticate can be addressed using digital certificates. Authentication using a digital certificate is illustrated in Figure 2-11.
Figure 2-11 Authentication with a digital certificate
The steps are as follows:
1. Alice sends Bob a request to talk.
2. Bob generates a random value and sends it to Alice as a challenge.
3. Alice encrypts the value with her private key and sends it to Bob. She also sends Bob her digital certificate containing her public key.
4. Bob verifies the digital certificate and uses the public key to decrypt the value.
5. Bob compares the decrypted value to the original value, verifying that it was truly Alice who sent him the certificate.
In practice, Alice could digitally sign her response to Bob rather than separately encrypting the challenge. Certificates are described in more detail in “Digital Certificates,” and digital signatures are discussed in “Digital Signatures.”
January-2005 [3]
3. d) Explain the difference between identification and authentication. [3]
Authentication is the process by which a user establishes his or her identity when accessing a network application or service. In the identification phase, the server is informed of a user's identity, after which the server asks for authentication, or proof, that the user is who she says she is. Many systems treat a username and password combination as an effective identification-and-authentication process. User identification: by means of the database server, identification tries to find out whether you are a registered user in the database or not; it applies to the whole database server. Authentication: it applies to special privileges for a user in the database, such as whether a user is authorized to run system procedures or to create new users.
January-2006 [6]
7. Write short notes on any three:
iv) Biometrics [6]
Biometrics are used to identify the input sample when compared to a template, in order to identify specific people by certain characteristics. Biometric authentication contrasts with the two traditional factors:
possession-based: using one specific "token" such as a security tag or a card
knowledge-based: the use of a code or password
A biometric system can provide the following two functions [21]:
Verification
Authenticates its users in conjunction with a smart card, username or ID number. The biometric template captured is compared with that stored against the registered user either on a smart card or database for verification.
Identification
Authenticates its users from the biometric characteristic alone, without the use of smart cards, usernames or ID numbers. The biometric template is compared to all records within the database and a closest match score is returned. The closest match within the allowed threshold is deemed to be the individual, who is then authenticated.
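The 1:1 versus 1:N distinction is easy to see in code. Below is a minimal sketch (the feature vectors, distance measure, and threshold are all invented for illustration): verification compares a captured sample against the single enrolled record for a claimed identity, while identification searches the whole database for the closest match within a threshold.

import math

# Hypothetical enrolled templates: user ID -> feature vector.
database = {
    "alice": [0.12, 0.80, 0.33],
    "bob":   [0.90, 0.10, 0.55],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(claimed_id, sample, threshold=0.2):
    """1:1 match against the single record for the claimed identity."""
    return distance(database[claimed_id], sample) <= threshold

def identify(sample, threshold=0.2):
    """1:N search: return the closest enrolled user within the threshold."""
    best_id = min(database, key=lambda uid: distance(database[uid], sample))
    return best_id if distance(database[best_id], sample) <= threshold else None

sample = [0.11, 0.79, 0.35]
print(verify("alice", sample))   # True: matches the claimed identity
print(identify(sample))          # 'alice': closest match in the database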
Types of Biometrics
Physiological
Iris
Fingerprint (including nail)
Hand (including knuckle, palm, vascular)
Face
Voice
Retina
DNA
Even Odor, Earlobe, Sweat pore, Lips
Behavioral
Signature
Keystroke
Voice
Gait
Increase security: provide a convenient and low-cost additional tier of security.
Reduce fraud by employing hard-to-forge technologies and materials, e.g. minimise the opportunity for ID fraud and buddy punching.
Eliminate problems caused by lost IDs or forgotten passwords by using physiological attributes, e.g. prevent unauthorised use of lost, stolen or "borrowed" ID cards.
Reduce password administration costs.
Replace hard-to-remember passwords which may be shared or observed.
Integrate a wide range of biometric solutions and technologies, customer applications and databases into a robust and scalable control solution for facility and network access.
Make it possible, automatically, to know WHO did WHAT, WHERE and WHEN.
Offer significant cost savings or increased ROI in areas such as Loss Prevention or Time & Attendance.
Unequivocally link an individual to a transaction or event.

January-2007 [18]
1.
d) How is Dictionary attack different from Brute Force attack? [4]
A brute force attack consists of trying every possible code, combination, or password until you find the right one.
In most cases, a dictionary attack will work more quickly than a brute force attack. A brute force attack is, however, more certain to achieve results eventually than a dictionary attack.
In contrast with a brute force attack, where all possibilities are searched through exhaustively, a dictionary attack tries only those possibilities which are most likely to succeed, typically derived from a list of words in a dictionary.
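The difference is easy to show in code. In this minimal sketch (the hash target, wordlist, and character set are invented for illustration), the brute-force search enumerates every combination, while the dictionary attack walks only a list of likely candidates:

import hashlib
from itertools import product

target = hashlib.sha256(b"cat").hexdigest()   # hash of the unknown password

def brute_force(charset="abcdefghijklmnopqrstuvwxyz", max_len=3):
    """Try every combination up to max_len: exhaustive but slow."""
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target:
                return guess
    return None

def dictionary_attack(wordlist=("password", "letmein", "cat", "dragon")):
    """Try only likely candidates: fast, but fails if the word isn't listed."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == target:
            return guess
    return None

print(dictionary_attack())   # 'cat', after at most 4 guesses
print(brute_force())         # 'cat', after up to 26 + 26^2 + 26^3 guesses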
5.
b) How does biometric help in security electronic banking? [8]
Internet banking is the provision of banking services over the internet that gives people the opportunity of easy access to their banking activities.
A security apparatus receives a biometric input from a user, which then is compared to a template to determine a correlation factor. The correlation factor, a fixed code and either a time-varying code or a challenge code then are combined to generate a token. The token is displayed to the user, who then enters the token at an access device. The access device is coupled to a secure host system. The access device forwards the token to the host, which processes the token to determine whether access is permitted. In one embodiment, the host is an electronic banking system. If access to such system is permitted the user is allowed to perform an electronic funds transfer. The security apparatus in one embodiment is an integrated circuit card. Each apparatus includes a sensor for detecting the holder's biometric information (i.e., voice, signature, fingerprint), along with a processor and display. The processor generates the token which then is displayed to the holder.
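A rough sketch of the token generation the apparatus describes is given below, under heavy assumptions: the field sizes, the HMAC construction, and the 30-second time step are my own illustrative choices, not taken from the patent.

import hashlib
import hmac
import time

FIXED_CODE = b"device-serial-0042"      # hypothetical per-device fixed code
SECRET = b"shared-secret"               # hypothetical key shared with the host

def make_token(correlation_factor):
    """Combine the biometric correlation factor, the fixed code, and a
    time-varying code into a short token for the user to type in."""
    time_step = int(time.time()) // 30           # 30-second time-varying code
    message = (FIXED_CODE
               + correlation_factor.to_bytes(2, "big")
               + time_step.to_bytes(8, "big"))
    digest = hmac.new(SECRET, message, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 10**8:08d}"  # 8-digit token

# e.g. a correlation factor of 97 (out of 100) from the fingerprint match:
print(make_token(97))

The host, holding the same secret and fixed code, recomputes the token for the current time step and compares it with what the user entered.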
7.
a) How is Kerberos designed to provide strong authentication for client/server applications by using secret key cryptography? Also mention the shortcomings of Kerberos. [6]
July-2007 [4]
5.
a) What is biometrics and biometrics authentication? [4]
Biometrics is the science and technology of measuring and analyzing biological data. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements, for authentication purposes. We can authenticate an identity in three ways: by something the user knows (such as a password or personal identification number), something the user has (a security token or smart card) or something the user is (a physical characteristic, such as a fingerprint, called a biometric).
All three authentication mechanisms have drawbacks, so security experts routinely recommend using two separate mechanisms, a process called two-factor authentication. But implementing two-factor authentication requires expensive hardware and infrastructure changes. Therefore, security has most often been left to just a single authentication method.
Passwords are cheap, but most implementations offer little real security. Managing multiple passwords for different systems is a nightmare, requiring users to maintain lists of passwords and systems that are inevitably written down because they can't remember them. The short answer, talked about for decades but rarely achieved in practice, is the idea of single sign-on.
Using security tokens or smart cards requires more expense, more infrastructure support and specialized hardware. Still, these used to be a lot cheaper than biometric devices and, when used with a PIN or password, offer acceptable levels of security, if not always convenience.
Biometric authentication has been widely regarded as the most foolproof - or at least the hardest to forge or spoof. Since the early 1980s, systems of identification and authentication based on physical characteristics have been available to enterprise IT. These biometric systems were slow, intrusive and expensive, but because they were mainly used for guarding mainframe access or restricting physical entry to relatively few users, they proved workable in some high-security situations. Twenty years later, computers are much faster and cheaper than ever. This, plus new, inexpensive hardware, has renewed interest in biometrics.
Types of Biometrics
A number of biometric methods have been introduced over the years, but few have gained wide acceptance.
Signature dynamics. Based on an individual's signature, but considered unforgeable because what is recorded isn't the final image but how it is produced -- i.e., differences in pressure and writing speed at various points in the signature.
Typing patterns. Similar to signature dynamics but extended to the keyboard, recognizing not just a password that is typed in but the intervals between characters and the overall speeds and pattern. This is akin to the way World War II intelligence analysts could recognize a specific covert agent's radio transmissions by his "hand" -- the way he used the telegraph key.
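A toy illustration of typing-pattern matching follows; the stored profile, tolerance, and mean-absolute-difference measure are invented for this sketch. The idea is to compare the intervals between keystrokes of a new attempt against a stored rhythm profile:

# Stored profile: average inter-key intervals (seconds) for a known user
# typing their passphrase; the values here are invented for illustration.
profile = [0.21, 0.35, 0.18, 0.40, 0.22]

def interval_distance(attempt, stored):
    """Mean absolute difference between two interval sequences."""
    return sum(abs(a - b) for a, b in zip(attempt, stored)) / len(stored)

def matches_typing_pattern(attempt, stored=profile, tolerance=0.05):
    """Accept only if the typing rhythm is close to the stored profile."""
    if len(attempt) != len(stored):
        return False
    return interval_distance(attempt, stored) <= tolerance

print(matches_typing_pattern([0.20, 0.37, 0.17, 0.41, 0.23]))  # True
print(matches_typing_pattern([0.50, 0.10, 0.60, 0.15, 0.55]))  # False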

January-2008 [13]
1.
c) How is Dictionary Attack method different from Heuristic Attack method? [4]
A dictionary attack consists of trying "every word in the dictionary" as a possible password for an encrypted message.
A dictionary attack is generally more efficient than a brute force attack, because users typically choose poor passwords.
Dictionary attacks are generally far less successful against systems that use passphrases instead of passwords.
4.
b) How does biometrics facilitate the IT security efforts of the Financial institutions? [6]
Financial institutions use biometric identification as an additional layer of security. A recent Meridian Research study projects that by 2006, financial institutions will lose $8 billion to identity theft, but the true cost is much higher: for every dollar lost, four more are spent identifying and prosecuting the criminal. If you have sensitive data either in-house or networked via the Internet, biometric fingerprint identification adds an additional layer of security to personal and financial information.
In particular, financial institutions can use fingerprint biometric applications to help protect sensitive financial data from unauthorized users and increase customer confidence that personal information is secure. Unlike a username and password, a fingerprint cannot be easily stolen. CheckQ, a check fraud prevention system, and VaultQ, a safe-deposit access system, are two applications that illustrate the use of fingerprint biometrics in the financial industry.
Bad checks and identity thieves aren't the only security challenges banks and financial institutions face. If you're still using only passwords and PINs to protect your premises and computers, or to verify employee time and attendance, you're taking chances with your bottom line. US Biometrics provides a full range of integrated solutions and services for digital identity applications, including physical access control, network security, electronic transaction security and time/attendance. Every product in the solution suite is fully scalable, adapted specifically for business, and compatible with leading business software and operating platforms.
5.
c) Explain the difference between authentication and identification. [3]
Authentication is the process by which a user establishes his or her identity when accessing a network application or service. In the identification phase, the server is informed of a user's identity, after which the server asks for authentication, or proof, that the user is who she says she is. Many systems treat a username and password combination as an effective identification-and-authentication process.
User identification: by means of the database server, identification tries to find out whether you are a registered user in the database or not; it applies to the whole database server.
Authentication: it applies to special privileges for a user in the database, such as whether a user is authorized to run system procedures or to create new users.

NMIS(NETWORK MANAGEMENT AND INFORMATION SYSTEMS)-CHAPTER 1

1. Introduction to Information Security

January-2004 [4]
1.
a) What are four problems related to network security? Explain the meaning of each of them. [4]

July-2004 [8]
1.
a) Differentiate between passive and active attacks on a computer. [4]
An attempt to subvert or bypass a system's security.
Attacks may be passive or active.
Passive attacks try to intercept or read data without changing it. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources. Passive attacks involve only monitoring of information (interception), leading to loss of confidentiality, or traffic analysis (monitoring the exchange of information without knowing its precise contents), and are hard to prevent.
Examples of passive attacks: interception (attacks confidentiality), e.g. eavesdropping and man-in-the-middle snooping; traffic analysis (attacks confidentiality or anonymity), which can include traceback on a network and CRT radiation monitoring.
Active attacks attempt to alter or destroy data. An "active attack" attempts to alter system resources or affect their operation. Active attacks involve intervention in the information flow (interruption, modification and fabrication) and are easier to detect.
interruption: attacks availability
modification: attacks integrity
fabrication: attacks authenticity
Examples of active attacks: Trojan horses, reworked code.

b) What is malicious code? What are its different types? What differentiates one type from another? [4]
Malicious code (also called vandals) is a new breed of Internet threat that cannot be efficiently controlled by conventional antivirus software alone. In contrast to viruses, which require a user to execute a program in order to cause damage, vandals are auto-executable applications.
We will classify malicious code into three areas [23]:
A Virus is a self-replicating code segment which must be attached to a host executable. When the host is executed, the virus code may also execute. If possible, the virus will replicate by attaching a copy of itself to another executable. The virus may include an additional "payload" that triggers when specific conditions are met.
A Trojan horse is malicious code masquerading as a legitimate application. The goal of the code is to have the user believe they are conducting standard operations or running an innocuous application when in fact initiating its ulterior activities. There are many ways this attack manifests with the most frequent being reliance upon user naivety. A Trojan horse is similar to a virus, except a Trojan horse does not replicate.
A Worm is a self-replicating program. It is self-contained and does not require a host program. The program creates the copy and causes it to execute; no user intervention is required. Worms commonly utilize network services to propagate to other computer systems
January-2005 [4]
1.
a) List and describe three preventative measures that can be taken to minimize the risk of computer virus infection, other than the use of anti-virus software. [4]
The first thing that I recommend doing is to set Windows up to show file extensions. Windows is configured by default to hide the file extensions for known file types. A lot of virus authors take advantage of this by adding a false extension to an infected file. For example, if a virus was written in Visual Basic Script, it would have the .VBS extension. However, Windows knows the .VBS extension and therefore hides it. Many viruses use a filename like DOCUMENT.DOC.VBS. The idea is that since the .VBS is hidden, the user only sees the false extension .DOC, and assumes that the virus is a harmless document file.
Another step that you can take is to block file types that are potentially malicious. You can get away with this because you've already asked the users what types of attachments they commonly receive.
You might also set up a corporate policy that forbids users from bringing in any floppy disks or CDs from home. These foreign media could potentially carry viruses, and may also contain software that the company isn't licensed to use. You might even go so far as to remove the floppy and CD drives from the workstations.
A firewall is a system that prevents unauthorized use of and access to your computer. A firewall can be either hardware or software. Hardware firewalls provide a strong degree of protection from most forms of attack coming from the outside world and can be purchased as a stand-alone product or in broadband routers. A good software firewall will protect your computer from outside attempts to control or gain access to your computer, and usually provides additional protection against the most common Trojan programs or e-mail worms.

July-2005 [16]
1.
d) Differentiate between passive and active attacks on a computer. [4]
4.
b) What are Trojans? Give example of at least one commonly known Trojan? [6]
Trojan horses, otherwise referred to as trojans, are simply programs that pretend to be something else. Trojan horses are impostors: files that claim to be something desirable but, in fact, are malicious. A very important distinction between Trojan horse programs and true viruses is that they do not replicate themselves. Trojan horses contain malicious code that, when triggered, causes loss, or even theft, of data. For a Trojan horse to spread, you must invite these programs onto your computer, for example by opening an email attachment or by downloading and running a file from the Internet. Trojan.Vundo is one commonly known Trojan horse.
c) Differentiate between worms and viruses. [6]

Worms vs. viruses:
Worms are programs that replicate themselves from system to system without the use of a host file; a virus requires a host program to spread. A worm doesn't need any host program: it uses network flaws to spread. For example, if you receive an email carrying a virus, you have to double-click the attachment to activate it, but this is not the case with a worm.
A worm is similar to a virus by design and is considered to be a sub-class of a virus. Worms spread from computer to computer but, unlike a virus, have the capability to travel without any help from a person. A computer virus, by contrast, attaches itself to a program or file so it can spread from one computer to another, leaving infections as it travels.
A worm takes advantage of file or information transport features on your system, which allows it to travel unaided. A standard virus depends on some form of human intervention to propagate, whether that is opening an email attachment, clicking a malicious link, or transferring an infected disk from one machine to another. Since a worm can replicate and infect by itself, it is by far the most virulent type of virus, and can infect many millions of computers globally in a matter of hours.
A virus copies itself around the system by gradually attaching its code to every common executable program available on the computer; a worm transfers copies of itself across network links.
A virus is a piece of a program that attaches itself to a legitimate program and modifies the host program; it needs a host. A worm is a complete, self-contained program; it does not modify the host program.

January-2006 [4]
1.
e) Differentiate between active and passive attacks on a computer. [4]

July-2006 [6]
2.
c) What is the difference between passive and active attacks with respect to security threats faced in using the web. [6]

January-2007 [15]
4.
a) How is a virus different from a worm? What are the various types of viruses? [8]
Computer viruses are generally defined as programs introduced into a computer that replicate themselves. As it replicates, the program intentionally infects the computer, typically without the user even knowing about the damage being done. A virus, unlike worms or Trojan horses, needs an aid to transfer it to computers. Viruses usually take a large amount of computer memory, resulting in system crashes. Viruses are categorized into several types based on their features.
Macro Viruses
A macro virus, often scripted into common application programs such as Word or Excel, is spread by infecting documents. Macro viruses are known to be platform-independent, since the virus itself is written in the language of the application and not of the operating system. When the application is running, this allows the macro virus to spread across operating systems. Examples of these viruses are Melissa.A and Bablas.
Network Viruses
Network viruses spread rapidly through a Local Area Network (LAN), and sometimes throughout the Internet. Generally, network viruses multiply through shared resources, i.e., shared drives and folders. When the virus infects a computer, it searches through the network to attack its new potential prey. When the virus finishes infecting that computer, it moves on to the next, and the cycle repeats itself. The most dangerous network viruses are Nimda and SQLSlammer.
Logic Bombs
The logic bomb virus is a piece of code that is inserted into a software system. When a certain specific condition is met, such as clicking on an internet browser or opening a particular file, the logic bomb is set off. Many programmers set the malicious code to trigger on days such as April Fools' Day or Friday the 13th. When the virus is activated, various activities take place; for example, files may be permanently deleted.
Companion Viruses
Companion viruses take advantage of MS-DOS. This virus creates a new file with typically the .COM extension, but sometimes the .EXD extension as well. When a user manually types in the name of a program they desire without adding .EXE or any other specific extension, DOS assumes the user wants the file with the extension that comes first in alphabetical order, and thus runs the virus. The companion virus is rare on Windows XP computers, as this particular operating system does not use MS-DOS.
Boot Sector Viruses
Boot sector viruses generally hide in the boot sector, either of the bootable disk or of the hard drive. Unlike most viruses, this virus does not harm the files on the hard disk, but harms the hard disk itself. Boot sector viruses are uncommon at this day and age because they spread via floppy disks rather than CD-ROMs.
Multipartite Viruses
Multipartite viruses are spread through infected media and usually hide in memory. Gradually, the virus moves to the boot sector of the hard drive, infects executable files on the hard drive, and later spreads across the computer system.
6.
a) What is Trojan Horse? Explain some functions of the Trojan. Also suggest any three ways to detect Trojan. [7]
Trojan Horses in the wild often contain spying functions (such as a packet sniffer) or backdoor functions that allow a computer, unbeknownst to the owner, to be remotely controlled from the network, creating a "zombie computer". Because Trojan horses often have these harmful functions, there often arises the misunderstanding that such functions define a Trojan Horse.
Trojans and backdoors typically set up a hidden server, which a hacker with a client can then log on to. They have become polymorphic, process-injecting, prevention-disabling, and easy to use, and are therefore easy to abuse.
How do I detect them?
Checking active connections is the best method to determine whether your system has been compromised, but it requires that you:
A. have a basic understanding of the state of an "active connection", and
B. are familiar with the port numbers commonly used by Trojans.
Port scanning, traffic monitoring and process monitoring can all reveal Trojans; any suspicious activity shown by these procedures can be a sign of a Trojan. Nearly all remote-access Trojans use TCP or UDP sockets, and in many cases a Trojan has a default port that it listens on.
A simple netstat -a can reveal some Trojans. However, you need some knowledge and experience of TCP and services before you can conclude that your system is infected.
Port scanning has two distinct advantages: it can detect Trojan ports even if the Trojan uses netstat stealth techniques, and it can be used both locally and remotely. Always keep in mind that firewalls, routers and Intrusion Detection Systems (IDS) can affect the results of a port scan.
TCPView is a free utility by Sysinternals which not only lists the IP addresses communicating with your computer, but also tells you what program is using each connection. Armed with this information you can locate whatever program is sending data out of your machine and deal with it.
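As a small illustration of the port-scanning idea, here is a sketch in Python using only the standard library. The port list holds a few commonly cited examples of default Trojan ports; the host, timeout, and list itself are illustrative choices, and a real scan would cover a much larger, maintained list.

import socket

# Ports historically associated with well-known Trojans (commonly cited
# examples: NetBus 12345, SubSeven 27374, Back Orifice 31337).
SUSPECT_PORTS = [12345, 27374, 31337]

def check_local_ports(host="127.0.0.1", ports=SUSPECT_PORTS, timeout=0.5):
    """Report which of the given ports accept connections on this machine."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connection succeeded
                open_ports.append(port)
    return open_ports

suspicious = check_local_ports()
print("suspicious open ports:", suspicious or "none")

An open port on this list is only a hint, not proof of infection; as the text notes, you still need to identify which program owns the connection.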

July-2007 [4]
1.
d) Briefly explain confidentiality, Integrity and Availability with respect to information security [4]
Attributes of Information Security: Confidentiality, Integrity, Availability
A key aspect of Information Security is to preserve the confidentiality, integrity and availability of an organisation's information; it is only with this information that it can engage in commercial activities. Loss of one or more of these attributes can threaten the continued existence of even the largest corporate entities.
Confidentiality. Assurance that information is shared only among authorised persons or organisations. Breaches of confidentiality can occur when data is not handled in a manner adequate to safeguard the confidentiality of the information concerned. Such disclosure can take place by word of mouth, by printing, copying, e-mailing or creating documents and other data, etc. The classification of the information should determine its confidentiality and hence the appropriate safeguards.
Integrity. Assurance that the information is authentic and complete, ensuring that information can be relied upon to be sufficiently accurate for its purpose. The term integrity is used frequently when considering information security, as it represents one of the primary indicators of security (or lack of it). The integrity of data concerns not only whether the data is 'correct', but whether it can be trusted and relied upon. For example, making copies (say by e-mailing a file) of a sensitive document threatens both confidentiality and integrity, because by making one or more copies, the data is then at risk of change or modification.
Availability. Assurance that the systems responsible for delivering, storing and processing information are accessible when needed, by those who need them.

