Thursday, 4 December 2014

1. What is DHCP?
Ans: The Dynamic Host Configuration Protocol (DHCP) is a computer networking protocol used by devices (DHCP clients) to obtain configuration information for operation on a TCP/IP network. This protocol reduces system administration workload, allowing networks to add devices with little or no manual intervention.
DHCP consists of two components:-
1. A protocol that delivers host-specific configuration parameters from a DHCP server to a host.
2. A mechanism for the allocation of temporary or permanent network addresses to hosts.
IP requires the setting of many parameters within the protocol implementation software. Because IP can be used on many dissimilar kinds of network hardware, values for those parameters cannot be guessed at or assumed to have correct defaults. Thus DHCP supports three mechanisms for IP address allocation:-
1. Automatic allocation:-DHCP assigns a permanent IP address to the host.
2. Dynamic allocation:-DHCP assigns an IP address for a limited period of time. Such a network address is called a lease. This is the only mechanism that allows automatic reuse of addresses that are no longer needed by the host to which it was assigned.
3. Manual/Static allocation:-The host's address is assigned by a network administrator.
DHCP operations fall into four basic phases: IP discovery, IP lease offer, IP request, and IP lease acknowledgment.
IP discovery: The client broadcasts messages on the physical subnet to discover available DHCP servers. Network administrators can configure a local router to forward DHCP packets to a DHCP server on a different subnet. The client creates a User Datagram Protocol (UDP) packet with the destination address 255.255.255.255 (the limited broadcast address) or the specific subnet broadcast address.
IP lease offer: When a DHCP server receives an IP lease request from a client, it reserves an IP address for the client and extends an IP lease offer by sending a DHCPOFFER message to the client. This message contains the client's MAC address, the IP address that the server is offering, the subnet mask, the lease duration, and the IP address of the DHCP server making the offer.   
IP request: A client can receive DHCP offers from multiple servers, but it will accept only one DHCP offer and broadcast a DHCP request message.
IP lease acknowledgment: When the DHCP server receives the DHCPREQUEST message from the client, the configuration process enters its final phase. The acknowledgment phase involves sending a DHCPACK packet to the client. This packet includes the lease duration and any other configuration information that the client might have requested. At this point, the IP configuration process is complete.
           After the client obtains an IP address, it may use the Address Resolution Protocol (ARP) to probe the address and detect conflicts caused by overlapping address pools of DHCP servers.
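The discovery step above can be sketched in code. The following is a minimal, illustrative Python sketch of building a DHCPDISCOVER message following the BOOTP/DHCP field layout; the MAC address and transaction ID are made-up values, and a real client would set more options (such as a parameter-request list):

```python
import struct

def build_dhcp_discover(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal DHCPDISCOVER payload (illustrative sketch)."""
    # Fixed BOOTP header: op=1 (request), htype=1 (Ethernet), hlen=6, hops=0
    packet = struct.pack("!BBBB", 1, 1, 6, 0)
    packet += struct.pack("!I", xid)          # transaction ID
    packet += struct.pack("!HH", 0, 0x8000)   # secs, flags (broadcast bit set)
    packet += b"\x00" * 16                    # ciaddr, yiaddr, siaddr, giaddr
    packet += mac + b"\x00" * 10              # chaddr (padded to 16 bytes)
    packet += b"\x00" * 192                   # sname + file (unused)
    packet += b"\x63\x82\x53\x63"             # DHCP magic cookie
    packet += b"\x35\x01\x01"                 # option 53: message type = DISCOVER
    packet += b"\xff"                         # end option
    return packet

# The client would broadcast this over UDP from port 68 to port 67:
#   sock.sendto(packet, ("255.255.255.255", 67))
```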
2. Give the IP address 126.110.16.7 what class of address is it?
Ans: Since the first byte is 126, this IP address belongs to Class A. The Class A range for the first byte is 0-127, i.e., its most significant bit is always 0.
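The classful rule above can be sketched as a small function that looks only at the first octet:

```python
def ip_class(addr: str) -> str:
    """Classify an IPv4 address by its first octet (classful rules)."""
    first = int(addr.split(".")[0])
    if first < 128:          # leading bit 0
        return "A"
    elif first < 192:        # leading bits 10
        return "B"
    elif first < 224:        # leading bits 110
        return "C"
    elif first < 240:        # leading bits 1110 (multicast)
        return "D"
    return "E"               # reserved

print(ip_class("126.110.16.7"))  # → A
```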
5.  Given a net-mask of 255.255.255.0, how many subnets are available?
Ans: It depends on the address class, since 255.255.255.0 (/24) borrows the bits between the default class prefix and /24 for subnetting:
    65,536 (2^16)    for    Class A
    256 (2^8)        for    Class B
    1                for    Class C (255.255.255.0 is its default mask, so no subnetting)
Older conventions that exclude the all-zeros and all-ones subnets give 65,534 for Class A and 254 for Class B.
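The counting can be sketched as a short function; the optional flag covers the older convention of excluding the all-zeros and all-ones subnets:

```python
def subnet_count(default_prefix: int, subnet_prefix: int = 24,
                 exclude_zero_and_broadcast: bool = False) -> int:
    """Number of subnets when a classful network with the given default
    prefix is subnetted with a /subnet_prefix mask."""
    borrowed = subnet_prefix - default_prefix   # bits borrowed for subnetting
    count = 2 ** borrowed
    if exclude_zero_and_broadcast and borrowed >= 2:
        count -= 2                              # drop all-zeros and all-ones subnets
    return count

print(subnet_count(8))   # Class A (/8) with /24 → 65536
print(subnet_count(16))  # Class B (/16) with /24 → 256
print(subnet_count(24))  # Class C (/24) with /24 → 1
```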
8. Describe client-server model? Give example
Ans: Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.
Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model.






Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.
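The request/response loop described above can be sketched with sockets: a server listens, a client initiates the session, and the server returns the processed result. This is a purely illustrative, single-request sketch on localhost:

```python
import socket, threading

def echo_server(host="127.0.0.1", port=0):
    """A minimal TCP server: accepts one client, echoes its request back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)                      # server listens for incoming requests
    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)         # accept the client's request
        conn.sendall(b"echo: " + data) # return the requested information
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]        # actual port chosen by the OS

port = echo_server()
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))       # the client initiates the session
cli.sendall(b"hello")
print(cli.recv(1024))                  # → b'echo: hello'
cli.close()
```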
The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. Two-tier means that the client acts as one tier and the application, in combination with the server, acts as the other tier.
The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the Unified Modeling Language.
Specific types of clients include web browsers, email clients, and online chat clients.
Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.
Advantages
1) In most cases, a client -server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change.
2) All data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
3) Since data storage is centralized, updates to that data are far easier to administer than what would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.
4) Many mature client-server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use.
5) It functions with multiple different clients of different capabilities.
Disadvantages
Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become overloaded. This contrasts with a P2P network, whose aggregate bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.
The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download
EXAMPLES
A web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.
The client-server model has become one of the central ideas of network computing. Many business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, DNS.
In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.
16. Write short notes on ARP and RARP.
Ans:  ARP: Address Resolution Protocol (ARP) is a dynamic mapping protocol that finds the hardware address of a host from a known IP address. Whenever a host or a router has an IP datagram to send to another host or router it has the logical (IP) address of the receiver. This IP datagram must be encapsulated in a frame to be able to pass through the physical network for which the sender needs the physical address of the receiver.
The host or the router then sends an ARP query packet. This packet includes the physical and IP address of the sender and the IP address of the receiver. Because the sender does not know the physical address of the receiver, the query is broadcast over the network. Every host or router on the network receives and processes the ARP query packet, but only the intended recipient recognizes its IP address and sends back an ARP response packet. The response packet contains the recipient’s IP and physical address. The packet is unicast directly to the inquirer by using the physical address received in the query packet.
RARP: Reverse Address Resolution Protocol (RARP) finds the logical (IP) address for a machine that knows only its physical address. When an IP machine happens to be a diskless machine, it has no way of initially knowing its IP address. But it does know its physical (MAC) address. RARP discovers the identity of the IP address for diskless machines by sending out a packet that includes its MAC address and a request for the IP address assigned to that MAC address over the local network. A designated machine called a RARP server that knows all the IP addresses will respond with an RARP reply and the identity crisis is over. The requesting machine must be running a RARP client program; the responding machine must be running a RARP server program.     
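The ARP query packet described above has a fixed layout; a sketch of what an ARP request's payload looks like on Ethernet/IPv4 (the addresses below are illustrative, and the target hardware address is zeroed because it is exactly the unknown being asked for):

```python
import socket, struct

def build_arp_request(sender_mac: bytes, sender_ip: str, target_ip: str) -> bytes:
    """Build an ARP request payload for Ethernet/IPv4 (illustrative sketch)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                               # htype: Ethernet
        0x0800,                          # ptype: IPv4
        6, 4,                            # hardware / protocol address lengths
        1,                               # opcode: 1 = request (2 = reply)
        sender_mac, socket.inet_aton(sender_ip),
        b"\x00" * 6,                     # target MAC unknown: all zeros
        socket.inet_aton(target_ip),
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff", "192.0.2.1", "192.0.2.2")
print(len(pkt))  # → 28
```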
22. Write about working of ping, telnet and ftp.
Answer:
Ping: The program sends an ICMP echo request message to a host, expecting an ICMP echo reply to be returned.
We call the ping program that sends the echo requests the client, and the host being pinged the server. Most TCP/IP implementations support the Ping server directly in the kernel— the server is not a user process.
The figure below shows the format of the ICMP echo request and echo reply messages:

     0               7 8              15 16                             31
    +-----------------+-----------------+--------------------------------+
    |   Type (0 or 8) |     Code (0)    |            Checksum            |
    +-----------------+-----------------+--------------------------------+
    |            Identifier             |        Sequence Number         |
    +-----------------------------------+--------------------------------+
    |                          Optional data                             |
    +--------------------------------------------------------------------+

Figure: Format of ICMP message for echo request and echo reply.
As with other ICMP query messages, the server must echo the identifier and sequence number fields. Also, any optional data sent by the client must be echoed. These are presumably of interest to the client.
Unix implementations of ping set the identifier field in the ICMP message to the process ID of the sending process. This allows ping to identify the returned responses if there are multiple instances of ping running at the same time on the same host.
The sequence number starts at 0 and is incremented every time a new echo request is sent. ping prints the sequence number of each returned packet, allowing us to see if packets are missing, reordered, or duplicated. IP is a best effort datagram delivery service, so any of these three conditions can occur.
Historically the ping program has operated in a mode where it sends an echo request once a second, printing each echo reply that is returned. Newer implementations, however, require the -s option to operate this way. By default, these newer implementations send only a single echo request and output "host is alive" if an echo reply is received, or "no answer" if no reply is received within 20 seconds.
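A sketch of building the echo request message shown above, including the Internet checksum computed over the ICMP message (the identifier and sequence values are arbitrary):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP echo request: type 8, code 0, checksum over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, identifier, seq) + payload

pkt = build_echo_request(0x1234, 0)
# Recomputing the checksum over the full packet must yield 0 if it is valid:
print(icmp_checksum(pkt))  # → 0
```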
Telnet: Telnet was designed to work between any host (i.e., any operating system) and any terminal. Its specification, RFC 854, defines the lowest common denominator terminal, called the network virtual terminal (NVT). The NVT is an imaginary device to which both ends of the connection, the client and the server, map their real terminals. That is, the client operating system must map whatever type of terminal the user is on to the NVT. The server must then map the NVT onto whatever terminal type the server supports.
There are four modes of operation for most Telnet clients and servers.
Half-duplex: This is the default mode, but it is rarely used today. The default NVT is a half-duplex device that requires a GO AHEAD command (GA) from the server before accepting user input. The user input is echoed locally from the NVT keyboard to the NVT printer so that only completed lines are sent from the client to the server.
While this provides the lowest common denominator for terminal support, it doesn't adequately handle full-duplex terminals communicating with hosts that support full-duplex communications, which is the norm today. RFC 857 defines the ECHO option and RFC 858 defines the SUPPRESS GO AHEAD option. The combination of these two options provides support for the next mode, character at a time, with remote echo.
Character at a time: This is what we saw with Rlogin. Each character we type is sent by itself to the server. The server echoes most characters, unless the application on the server turns echoing off.
The problems with this mode are perceptible delays in echoing across long delay networks and the high volume of network traffic. Nevertheless, we'll see this is the common default for most implementations today.
We'll see that the way to enter this mode is for the server to have the SUPPRESS GO AHEAD option enabled. This can be negotiated by having the client send a DO SUPPRESS GO AHEAD (asking to enable the option at the server), or the server sending a WILL SUPPRESS GO AHEAD to the client (asking to enable the option itself). The server normally follows this with a WILL ECHO, asking to do the echoing.
Line at a time: This is often called "kludge line mode," because its implementation comes from reading between the lines in RFC 858. This RFC states that both the ECHO and SUPPRESS GO AHEAD options must be in effect to have character-at-a-time input with remote echo. Kludge line mode takes this to mean that when either of these options is not enabled, Telnet is in a line-at-a-time mode. In the next section we'll see an example of how this mode is negotiated, and how it is disabled when a program that needs to receive every keystroke is run on the server.
Linemode: We use this term to refer to the real linemode option, defined in RFC 1184. This option is negotiated between the client and server and corrects all the deficiencies of kludge line mode. Newer implementations support this option.
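The negotiation described above boils down to a few well-known byte values. A minimal sketch of the sequence a server could send to enter character-at-a-time mode with remote echo (option codes from RFC 854/857/858):

```python
# Telnet command bytes (RFC 854) and option codes (RFC 857/858)
IAC, WILL, DO = 255, 251, 253
ECHO, SUPPRESS_GO_AHEAD = 1, 3

def negotiate_char_mode() -> bytes:
    """Server-side byte sequence for character-at-a-time mode:
    WILL SUPPRESS-GO-AHEAD followed by WILL ECHO."""
    return bytes([IAC, WILL, SUPPRESS_GO_AHEAD, IAC, WILL, ECHO])

print(negotiate_char_mode().hex())  # → fffb03fffb01
```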
FTP: It is the Internet standard for file transfer. The file transfer provided by FTP copies a complete file from one system to another system. To use FTP we need an account to login to on the server, or we need to use it with a server that allows anonymous FTP.
FTP differs from the other applications that we've described because it uses two TCP connections to transfer a file.
The control connection is established in the normal client–server fashion. The server does a passive open on the well-known port for FTP (21) and waits for a client connection. The client does an active open to TCP port 21 to establish the control connection. The control connection stays up for the entire time that the client communicates with this server. This connection is used for commands from the client to the server and for the server's replies.
The IP type-of-service for the control connection should be "minimize delay" since the commands are normally typed by a human user.
A data connection is created each time a file is transferred between the client and server.
The IP type-of-service for the data connection should be "maximize throughput" since this connection is for file transfer.
Figure below shows the arrangement of the client and server and the two connections between them.

Figure: Processes involved in file transfer
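As a small illustration of the data connection: in passive mode, the server's reply on the control connection tells the client where to open the data connection. A sketch of parsing such a `227` reply (the address below is illustrative):

```python
import re

def parse_pasv(reply: str) -> tuple:
    """Parse a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply and
    return the (host, port) for the data connection."""
    nums = list(map(int, re.search(r"\((\d+(?:,\d+){5})\)", reply).group(1).split(",")))
    host = ".".join(map(str, nums[:4]))
    port = nums[4] * 256 + nums[5]      # high byte * 256 + low byte
    return host, port

print(parse_pasv("227 Entering Passive Mode (192,168,1,2,7,224)"))
# → ('192.168.1.2', 2016)
```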
  23. What is Virtual Private Network (VPN) and how does it work?
Ans: Virtual Private Network
Virtual private network (VPN) is a technology that is gaining popularity among large organizations that use the global Internet for both intra- and inter-organization communication, but require privacy in their internal communications. We discuss VPN here because it uses the IPSec Protocol to apply security to the IP data-grams.
                                      
                                                        Figure16: Virtual private network
Tunneling: To guarantee privacy and other security measures for an organization, a VPN can use IPSec in tunnel mode. In this mode, each IP datagram destined for private use within the organization is encapsulated in another datagram. To use IPSec in tunneling, the VPN needs two sets of addressing, as shown in Figure 17.
                     
Figure 17: Addressing in a VPN

The public network (Internet) is responsible for carrying the packet from R1 to R2. Outsiders cannot decipher the contents of the packet or the real source and destination addresses. Deciphering takes place at R2, which finds the destination address of the packet and delivers it.
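The two sets of addressing can be sketched as a toy encapsulation: the inner datagram keeps the private source and destination, while the outer datagram carries only the routers' public addresses. All addresses below are made up, and real IPSec would also encrypt and authenticate the payload:

```python
def tunnel_encapsulate(inner: dict, r1: str, r2: str) -> dict:
    """Tunnel-mode sketch: the whole inner datagram (with private
    addresses) becomes the payload of an outer datagram addressed
    between the two gateway routers."""
    return {"src": r1, "dst": r2, "payload": inner}

inner = {"src": "10.0.1.5", "dst": "10.0.2.9", "payload": b"private data"}
outer = tunnel_encapsulate(inner, r1="200.1.1.1", r2="200.2.2.2")
print(outer["src"], outer["dst"])  # → 200.1.1.1 200.2.2.2
```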
24. How does routing work?
Answer:
A TCP/IP network usually interconnects a number of hosts. Your UnixWare® host is connected to a TCP/IP network via a hardware network interface. Individual TCP/IP networks are in turn interconnected via IP routers. IP routers forward IP packets from one TCP/IP network to another and exchange routing information with each other to deliver packets across a number of networks. Other types of routers may forward traffic for protocol families other than TCP/IP.
If all of the hosts at your site are connected to a single TCP/IP network that is not interconnected with any other TCP/IP networks, an IP router is unnecessary. If your site comprises many TCP/IP networks, or if you want to interconnect your network with other TCP/IP networks, you must configure the interconnections with IP routers so that all hosts can communicate.
Many types of machines may serve as IP routers. A number of vendors offer machines dedicated entirely to the function of IP routing. A system may act both as a host (offering network services such as remote login) and a router.
The IP routing mechanisms consist of:
a routing table inside the operating system kernel
routing daemons
routing protocols and configuration files that implement the routing protocols
configuration and troubleshooting utilities
Routers manage network traffic by maintaining routing tables. These routing tables contain information that specifies which networks and hosts can be reached by which routes. A routing table entry can be either static or dynamic.
On a more complex network, that is, a network in which a router connects a local network to other routers and gateways, configure the router to use dynamic routing tables. Dynamic routing tables allow the router to route traffic to the most current gateway destinations.
A router can build and maintain a dynamic routing table by running a routing daemon such as routed or gated. The routing daemon manages its routing table by exchanging routing information with gateways and other routers. When routed runs on a router, it broadcasts its routing table and listens for broadcasts from other directly connected routers. It continually updates its routing table based on those broadcasts. A routing daemon that both broadcasts its routing tables and listens for broadcasts from other routers is termed ``active''.
A routing daemon can also run on a host that itself is not a router. In this case, the daemon is configured to listen for broadcasts and update its local routing table; it does not broadcast to other machines. This is termed ``passive''. When a machine can send an IP packet to another machine without going through a third machine, the route the packet will travel is said to be a ``direct route''. The selection of that route is called ``direct routing''. In ``Example internetwork'', the machine Columbia can trace a direct route to the machines Seine, Thames, and Volga on the 172.16.1 network. Columbia cannot reach Rome, London, or Paris directly.
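The table lookup itself can be sketched: for a destination, pick the most specific matching entry (longest prefix match), with a 0.0.0.0/0 entry serving as the default route. The addresses below are illustrative:

```python
import ipaddress

def lookup(routing_table: dict, destination: str) -> str:
    """Longest-prefix-match: among entries whose network contains the
    destination, choose the one with the longest (most specific) prefix."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in routing_table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else "unreachable"

table = {
    "172.16.1.0/24": "direct",         # directly connected network
    "172.16.0.0/16": "192.168.0.1",
    "0.0.0.0/0":     "192.168.0.254",  # default gateway
}
print(lookup(table, "172.16.1.7"))     # → direct
print(lookup(table, "8.8.8.8"))        # → 192.168.0.254
```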

VPN Technology
VPN technology uses IPSec in the tunnel mode to provide authentication, integrity, and privacy.


26. What is multitasking and multithreading?
Ans:
Multitasking is the feature supported by the operating system to execute more than one task simultaneously.
Multithreading is the system environment where tasks share the same program's load module under the multitasking environment. It is a subset of multitasking, since it concerns tasks which use the same program.
          In simple words: executing more than one task concurrently is multitasking, while one program being used by more than one task is multithreading.
          Multitasking means that a program running in a single partition or region allows multiple tasks to execute simultaneously. A task is an execution of a program or programs for a specific user. For example, if user1 is running PGM A, then user1 has created a task.
    Under the multithreading environment, a program is shared by several tasks concurrently. For each task, the program must work as if it were executing instructions exclusively for each task. Therefore, it requires special considerations such as reentrancy.
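A sketch of the idea in Python: several tasks (threads) concurrently execute the same program (one function), each working as if exclusively for its own user; per-task data is local, while the shared results table needs coordination:

```python
import threading

results = {}
lock = threading.Lock()

def shared_task(user: str, n: int):
    """One program (this function) executed concurrently by several tasks;
    it must behave as if running exclusively for each user (reentrancy)."""
    total = sum(range(n))          # works only on per-task (local) data
    with lock:                     # shared state requires coordination
        results[user] = total

threads = [threading.Thread(target=shared_task, args=(f"user{i}", 10 * i))
           for i in range(1, 4)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # each task got its own result from the same program
```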
27. Explain the protocol SMTP. Give a brief comparison between SMTP and HTTP.
Ans: SMTP (Simple Mail Transfer Protocol): The actual mail transfer is done through message transfer agents (MTAs). To send mail, a system must have the client MTA, and to receive mail, a system must have a server MTA. The formal protocol that defines the MTA client and server in the Internet is called the Simple Mail Transfer Protocol (SMTP). As we said before, two pairs of MTA client/server programs are used in the most common situation. Figure 1 shows the range of the SMTP protocol in this scenario.
                        
                                            Figure 1: SMTP range

SMTP is used two times, between the sender and the sender's mail server and between the two mail servers. As we will see shortly, another protocol is needed between the mail server and the receiver.
SMTP simply defines how commands and responses must be sent back and forth. Each network is free to choose a software package for implementation. We discuss the mechanism of mail transfer by SMTP in the remainder of the section.
Commands and Responses
SMTP uses commands and responses to transfer messages between an MTA client and an MTA server
                                                                       Figure 2
Each command or reply is terminated by a two-character (carriage return and line feed) end-of-line token. Commands are sent from the client to the server, and responses are sent from the server to the client.
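A sketch of the command/response mechanics: each command is an ASCII line ended by CRLF, and each reply begins with a three-digit code (the hostnames and addresses below are illustrative):

```python
def smtp_command(verb: str, arg: str = "") -> bytes:
    """SMTP commands are ASCII lines terminated by CRLF."""
    line = f"{verb} {arg}".strip()
    return line.encode("ascii") + b"\r\n"

def reply_ok(reply: bytes) -> bool:
    """Replies begin with a 3-digit code; 2xx means success."""
    return reply[:1] == b"2"

# A typical envelope exchange (server replies would arrive between commands):
for cmd in [smtp_command("HELO", "client.example.com"),
            smtp_command("MAIL", "FROM:<alice@example.com>"),
            smtp_command("RCPT", "TO:<bob@example.com>"),
            smtp_command("DATA")]:
    print(cmd)

print(reply_ok(b"250 OK"))  # → True
```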
Comparison between SMTP and HTTP
1. Similarities between SMTP and HTTP
   (a) Both are message transfer protocols.
   (b) Both transfer data between a client and a server in a similar way.
   (c) Both use a command-response technique to transfer data between client and server.
   (d) Both use a TCP port to transfer data.
2. Differences between SMTP and HTTP
HTTP is the Hypertext Transfer Protocol; SMTP is the Simple Mail Transfer Protocol.
HTTP functions like a combination of FTP and SMTP, whereas SMTP is a single-purpose protocol.
In HTTP, messages are transferred immediately, but in SMTP messages are first stored and then forwarded.
HTTP is used mainly to access data on the World Wide Web; SMTP is used only to transfer mail.
SMTP is a push protocol, i.e., it pushes the message from the client to the server, whereas HTTP is mainly a pull protocol: the client pulls data from the server.


30. What do you mean by multiple access in wireless networks?
Ans: Wireless networks are multiuser systems in which information is conveyed by means of radio waves. In a multiuser environment, access coordination can be accomplished via several mechanisms: by insulating the various signals sharing the same access medium, by allowing the signals to contend for access, or by combining these two approaches. The choice of the appropriate scheme must take into account a number of factors, such as the type of traffic under consideration, available technology, cost, and complexity.
Signal insulation is easily attainable by means of a scheduling procedure in which signals are allowed to access the medium according to a predefined plan. Signal contention occurs exactly because no signal insulation mechanism is used. Access coordination may be carried out in different domains: the frequency domain, time domain, code domain, and space domain. Signal insulation in each domain is attained by splitting the available resource into nonoverlapping slots (frequency slots, time slots, code slots, and space slots) and assigning each signal a slot. Four main multiple-access technologies are used by wireless networks: frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and space division multiple access (SDMA).
Frequency Division Multiple Access
FDMA is certainly the most conventional method of multiple access and was the first technique to be employed in modern wireless applications. In FDMA, the available bandwidth is split into a number of equal subbands, each of which constitutes a physical channel. The channel bandwidth is a function of the services to be provided and of the available technology and is identified by its center frequency, known as a carrier. In single channel per carrier FDMA technology, the channels, once assigned, are used on a non-time-sharing basis. Thus, a channel allocated to a given user remains allocated until the end of the task for which that specific assignment was made.
Time Division Multiple Access
TDMA is another widely known multiple-access technique and succeeded FDMA in modern wireless applications. In TDMA, the entire bandwidth is made available to all signals but on a time-sharing basis. In such a case, the communication is carried out on a buffer-and-burst scheme so that the source information is first stored and then transmitted. Prior to transmission, the information remains stored during a period of time referred to as a frame. Transmission then occurs within a time interval known as a (time) slot. The time slot constitutes the physical channel.
Code Division Multiple Access
CDMA is a nonconventional multiple-access technique that immediately found wide application in modern wireless systems. In CDMA, the entire bandwidth is made available simultaneously to all signals. In theory, very little dynamic coordination is required, as opposed to FDMA and TDMA, in which frequency and time management have a direct impact on performance. To accomplish CDMA systems, spread-spectrum techniques are used. (Appendix C introduces the concept of spread spectrum.)
In CDMA, signals are discriminated by means of code sequences or signature sequences, which correspond to the physical channels. Each transmitter-receiver pair is allotted one code sequence with which a communication is established. At the reception side, detection is carried out by means of a correlation operation. Ideally, the best performance is attained with zero cross-correlation codes, i.e., with orthogonal codes. In theory, for a synchronous system and for equal-rate users, the number of users within a given bandwidth is dictated by the number of possible orthogonal code sequences.
In general, CDMA systems operate synchronously in the forward direction and asynchronously in the reverse direction. The point-to-multipoint characteristic of the downlink facilitates the synchronous approach, because one reference channel, broadcast by the base station, can be used by all mobile stations within its service area for synchronization purposes. On the other hand, the implementation of a similar feature on the reverse link is not as simple because of its multipoint-to-point transmission characteristic.
In theory, the use of orthogonal codes eliminates multiple-access interference. Therefore, in an ideal situation, the forward link would not present multiple-access interference. The reverse link, in turn, is characterized by multiple-access interference. In practice, however, interference still occurs even in synchronous systems, because of multipath propagation and because of other-cell signals. The multipath phenomenon produces delayed and attenuated replicas of the signals, with these replicas then losing synchronism and, therefore, orthogonality. The other-cell signals, in turn, are not time-aligned with the desired signal; therefore, they are not orthogonal with the desired signal and may cause interference.
Channels in the forward link are identified by orthogonal sequences, i.e., channelization in the forward link is achieved by the use of orthogonal codes. Base stations are identified by pseudonoise (PN) sequences. Therefore, in the forward link, each channel uses a specific orthogonal code and employs a PN sequence modulation, with a PN code sequence specific to each base station. Hence, multiple access in the forward link is accomplished by the use of spreading orthogonal sequences. The purpose of the PN sequence in the forward link is to identify the base station and to reduce the interference. In general, the use of orthogonal codes in the reverse link finds no direct application, because the reverse link is intrinsically asynchronous. Channelization in the reverse link is achieved with the use of long PN sequences combined with some private identification, such as the electronic serial number of the mobile station. Some systems, on the other hand, implement some sort of synchronous transmission on the reverse link. In such a case, orthogonal codes may also be used for channelization purposes in the reverse link.
Several PN sequences are used in the various systems, and they will be detailed for the several technologies. Two main orthogonal sequences are used in all CDMA systems: Walsh codes and orthogonal variable spreading functions (OVSF) (see Appendix C).
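The orthogonal-code idea can be illustrated with Walsh codes: two users spread their bits with different codes, transmit over the same channel simultaneously, and each receiver's correlation recovers only its own bit. A toy sketch (bits are represented as +1/-1):

```python
def walsh(n: int):
    """Generate 2^n Walsh codes (rows of a Hadamard matrix), ±1 valued."""
    codes = [[1]]
    for _ in range(n):
        codes = [c + c for c in codes] + [c + [-x for x in c] for c in codes]
    return codes

def spread(bit: int, code):
    """Spread one bit (+1 or -1) over the full code sequence."""
    return [bit * c for c in code]

def despread(signal, code):
    """Correlate with the code; orthogonal users contribute zero."""
    s = sum(x * c for x, c in zip(signal, code))
    return 1 if s > 0 else -1

codes = walsh(2)                   # 4 mutually orthogonal codes of length 4
user1, user2 = codes[1], codes[2]
# Two users transmit simultaneously over the same band:
channel = [a + b for a, b in zip(spread(+1, user1), spread(-1, user2))]
print(despread(channel, user1), despread(channel, user2))  # → 1 -1
```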
Space Division Multiple Access
SDMA is a nonconventional multiple-access technique that finds application in modern wireless systems mainly in combination with other multiple-access techniques. The spatial dimension has been extensively explored by wireless communications systems in the form of frequency reuse. The deployment of advanced techniques to take further advantage of the spatial dimension is embedded in the SDMA philosophy. In SDMA, the entire bandwidth is made available simultaneously to all signals. Signals are discriminated spatially, and the communication trajectory constitutes the physical channels. The implementation of an SDMA architecture is based strongly on antennas technology coupled with advanced digital signal processing. As opposed to the conventional applications in which the locations are constantly illuminated by rigid-beam antennas, in SDMA the antennas should provide for the ability to illuminate the locations in a dynamic fashion. The antenna beams must be electronically and adaptively directed to the user so that, in an idealized situation, the location alone is enough to discriminate the user.
FDMA and TDMA systems are usually considered to be narrowband, whereas CDMA systems are usually designed to be wideband. SDMA systems are deployed together with the other multiple-access technologies.
Q.33 Explain the architecture of I.E.E.E. 802.11 wireless LAN.
ANS: I.E.E.E. 802.11 wireless LAN covers the physical and data link layers.
 Architecture
  There are two kinds: the basic service set (BSS) and the extended service set (ESS)
Basic Service Set:
I.E.E.E. 802.11 defines the basic service set (BSS) as the building block of a wireless LAN. A basic service set is made of stationary or mobile wireless stations and an optional central base station, known as the access point (AP).
Fig 33.1 shows two sets in this standard

                                   Ad hoc network                                                             Infrastructure
                               (BSS without an AP)                                                          (BSS with an AP)
The BSS without an AP is a stand-alone network and cannot send data to other BSSs. It is called an ad hoc architecture. In this architecture, stations can form a network without the need of an AP; they can locate one another and agree to be part of a BSS. A BSS with an AP is sometimes referred to as an infrastructure network.

Extended Service Set:
An extended service set (ESS) is made up of two or more BSSs with APs. In this case, the BSSs are connected through a distribution system, which is usually a wired LAN. The distribution system connects the APs in the BSSs. The distribution system in IEEE 802.11 can be any IEEE LAN, such as Ethernet. The mobile stations are normal stations inside a BSS. The stationary stations are AP stations that are part of a wired LAN. Fig 33.2 shows an ESS.
    

39.  What is Tunneling?
Ans: Tunneling is a strategy used when two computers using IPv6 want to communicate with each other and the packet must pass through a region that uses IPv4. To pass through this region, the packet must have an IPv4 address. So, the IPv6 packet is encapsulated in an IPv4 packet when it enters the region, and it leaves its capsule when it exits the region. It seems as if the IPv6 packet goes through a tunnel at one end and emerges at the other end. The protocol value is set to 41 to make it clear that the IPv4 packet is carrying an IPv6 packet as data.
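As a hedged illustration of the encapsulation step (a real IP stack builds the header and computes the checksum itself; the addresses below are documentation examples), the outer IPv4 header with protocol 41 can be sketched as:

```python
import struct

IPPROTO_IPV6 = 41  # protocol number that marks IPv6-in-IPv4 tunneling

def encapsulate(ipv6_packet, src_v4, dst_v4):
    """Wrap an IPv6 packet in a minimal 20-byte IPv4 header.
    Illustrative only: checksum is left at zero here."""
    ver_ihl = (4 << 4) | 5            # IPv4, header length 5 * 4 = 20 bytes
    total_len = 20 + len(ipv6_packet)
    header = struct.pack('!BBHHHBBH4s4s',
                         ver_ihl, 0, total_len,
                         0, 0,             # identification, flags/fragment
                         64,               # TTL
                         IPPROTO_IPV6,     # protocol field = 41
                         0,                # checksum (omitted in this sketch)
                         bytes(map(int, src_v4.split('.'))),
                         bytes(map(int, dst_v4.split('.'))))
    return header + ipv6_packet

# A 40-byte dummy IPv6 header (version nibble 6) as the payload.
pkt = encapsulate(b'\x60' + b'\x00' * 39, '192.0.2.1', '198.51.100.7')
print(pkt[9])        # 41 -> routers at the tunnel exit know the payload is IPv6
```

At the tunnel exit, the router sees protocol 41, strips the outer 20 bytes, and forwards the inner IPv6 packet normally.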
Tunneling is shown in fig. below-

      




40.  List the components of a Simple Mobile IP Deployment. Explain each Mobile IP Components with clear diagram.
Ans: The figure shows the components of Mobile IPv6. The components of Mobile IPv6 are the following:
Home Link
This is the link that is assigned the home subnet prefix, from which the mobile node obtains its home address. The home agent resides on the home link.


Home Address
An address assigned to the mobile node when it is attached to the home link and through which the mobile node is always reachable, regardless of its location on an IPv6 network. If the mobile node is attached to the home link, Mobile IPv6 processes are not used and communication occurs normally. If the mobile node is away from home (not attached to the home link), packets addressed to the mobile node's home address are intercepted by the home agent and tunneled to the mobile node's current location on an IPv6 network. Because the mobile node is always assigned the home address, it is always logically connected to the home link.
Home Agent
A router on the home link that maintains registrations of mobile nodes that are away from home and the different addresses that they are currently using. If the mobile node is away from home, it registers its current address with the home agent, which tunnels data sent to the mobile node's home address to the mobile node's current address on an IPv6 network and forwards tunneled data sent by the mobile node.
Although the figures in this report show the home agent as the router connecting the home link to an IPv6 network, the home agent does not have to serve this function. The home agent can also be a node on the home link that does not perform any forwarding when the mobile node is at home.
Mobile Node
It is an IPv6 node that can change links, and therefore addresses, and maintain reachability using its home address. A mobile node has awareness of its home address and the global address for the link to which it is attached (known as the care-of address), and indicates its home address/care-of address mapping to the home agent and to the Mobile IPv6-capable nodes with which it is communicating.
Foreign Link
This is a link that is not the mobile node's home link.
Foreign Agent
A foreign agent is a router that stores information about mobile nodes visiting its network. Foreign agents also advertise care-of-addresses which are used by Mobile IP.


Care-of address
An address used by a mobile node while it is attached to a foreign link. For stateless address configuration, the care-of address is a combination of the foreign subnet prefix and an interface ID determined by the mobile node. A mobile node can be assigned multiple care-of addresses; however, only one care-of address is registered as the primary care-of address with the mobile node's home agent. The association of a home address with a care-of address for a mobile node is known as a binding. Correspondent nodes and home agents keep information on bindings in a binding cache.
Correspondent Node
This is an IPv6 node that communicates with a mobile node. A correspondent node does not have to be Mobile IPv6-capable. If the correspondent node is Mobile IPv6-capable, it can also be a mobile node that is away from home.

41. Describe the working of mobile IP.
Ans:
Mobile IP supports mobility by transparently binding the home address of the mobile node with its care-of address. This mobility binding is maintained by some specialized routers known as mobility agents. Mobility agents are of two types - home agents and foreign agents. The home agent, a designated router in the home network of the mobile node, maintains the mobility binding in a mobility binding table where each entry is identified by the tuple <permanent home address, temporary care-of address, association lifetime>. Figure 1 shows a mobility binding table. The purpose of this table is to map a mobile node's home address with its care-of address and forward packets accordingly.

        Figure 1: Mobility Binding Table                                      Figure 2: Visitor List
Foreign agents are specialized routers on the foreign network where the mobile node is currently visiting. The foreign agent maintains a visitor list which contains information about the mobile nodes currently visiting that network. Each entry in the visitor list is identified by the tuple: < permanent home address, home agent address, media address of the mobile node, association lifetime>. Figure 2 shows an instance of a visitor list.
In a typical scenario, the care-of address of a mobile node is the foreign agent's IP address.
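As a sketch (addresses, MAC address, and lifetimes are invented for illustration), the mobility binding table and visitor list above can be modeled as dictionaries keyed by the mobile node's home address:

```python
# Home agent's mobility binding table:
#   home address -> (care-of address, association lifetime in seconds)
binding_table = {
    '131.193.171.5': ('114.17.32.5', 200),
}

# Foreign agent's visitor list:
#   home address -> (home agent addr, media (MAC) addr, lifetime in seconds)
visitor_list = {
    '131.193.171.5': ('131.193.171.1', '00:16:3e:12:34:56', 200),
}

def route(home_addr):
    """Home agent lookup: to which care-of address should packets
    destined for this home address be tunneled?"""
    care_of, _lifetime = binding_table[home_addr]
    return care_of

print(route('131.193.171.5'))   # 114.17.32.5
```

The lookup is exactly the mapping described above: home address in, care-of address out, so the home agent knows where to tunnel intercepted packets.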
The basic Mobile IP protocol has four distinct stages. These are:
Agent Discovery: Agent Discovery consists of the following steps:
Mobility agents advertise their presence by periodically broadcasting Agent Advertisement messages. An Agent Advertisement message lists one or more care-of addresses and a flag indicating whether it is a home agent or a foreign agent.
The mobile node receiving the Agent Advertisement message observes whether the message is from its own home agent and determines whether it is on the home network or a foreign network.
If a mobile node does not wish to wait for the periodic advertisement, it can send out Agent Solicitation messages that will be responded by a mobility agent.
Registration: Registration consists of the following steps:
If a mobile node discovers that it is on the home network, it operates without any mobility services.
If the mobile node is on a new network, it registers with the foreign agent by sending a Registration Request message which includes the permanent IP address of the mobile host and the IP address of its home agent.
The foreign agent in turn performs the registration process on behalf of the mobile host by sending a Registration Request containing the permanent IP address of the mobile node and the IP address of the foreign agent to the home agent.
When the home agent receives the Registration Request, it updates the mobility binding by associating the care-of address of the mobile node with its home address.
The home agent then sends an acknowledgement to the foreign agent.
The foreign agent in turn updates its visitor list by inserting the entry for the mobile node and relays the reply to the mobile node.
Figure 3 illustrates the registration process.

Figure 3: Registration process in Mobile IP
In Service: This stage can be subdivided into the following steps:
When a correspondent node wants to communicate with the mobile node, it sends an IP packet addressed to the permanent IP address of the mobile node.
The home agent intercepts this packet and consults the mobility binding table to find out if the mobile node is currently visiting any other network.
The home agent finds out the mobile node's care-of address and constructs a new IP header that contains the mobile node's care-of address as the destination IP address. The original IP packet is put into the payload of this IP packet. It then sends the packet. This process of encapsulating one IP packet into the payload of another is known as IP-within-IP encapsulation, or tunneling.
When the encapsulated packet reaches the mobile node's current network, the foreign agent decapsulates the packet and finds out the mobile node's home address. It then consults the visitor list to see if it has an entry for that mobile node.
If there is an entry for the mobile node on the visitor list, the foreign agent retrieves the corresponding media address and relays it to the mobile node.
When the mobile node wants to send a message to a correspondent node, it forwards the packet to the foreign agent, which in turn relays the packet to the correspondent node using normal IP routing.
The foreign agent continues serving the mobile node until the granted lifetime expires. If the mobile node wants to continue the service, it has to reissue the Registration Request.
Figure 4 illustrates the tunneling operation.

Figure 4: Tunneling operation in Mobile IP
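The In Service steps can be condensed into a small conceptual model. Packets are represented as dictionaries rather than real headers, and the addresses are invented for the example; a real implementation manipulates actual IP headers as in RFC-style IP-within-IP encapsulation:

```python
# Illustrative model of IP-within-IP tunneling in Mobile IP.
def tunnel(packet, home_agent_addr, care_of_addr):
    """Home agent wraps the original packet inside a new outer packet
    addressed to the mobile node's care-of address."""
    return {'src': home_agent_addr, 'dst': care_of_addr,
            'proto': 'IP-in-IP', 'payload': packet}

def detunnel(outer):
    """Foreign agent strips the outer header to recover the original packet."""
    return outer['payload']

# Correspondent node sends to the mobile node's permanent home address.
original = {'src': '10.0.0.9', 'dst': '131.193.171.5', 'payload': b'hello'}
outer = tunnel(original, '131.193.171.1', '114.17.32.5')

print(outer['dst'])                 # 114.17.32.5 (the care-of address)
print(detunnel(outer) == original)  # True: inner packet is unchanged
```

The inner packet is carried untouched, which is why the correspondent node never needs to know the mobile node's current location.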
Deregistration: If a mobile node wants to drop its care-of address, it has to deregister with its home agent. It achieves this by sending a Registration Request with the lifetime set to zero. There is no need for deregistering with the foreign agent as registration automatically expires when lifetime becomes zero. However if the mobile node visits a new network, the old foreign network does not know the new care-of address of the mobile node. Thus datagrams already forwarded by the home agent to the old foreign agent of the mobile node are lost.
42. What are the features of server? Explain the types of Web-Server.
Ans:   The common features of server are: 
Virtual hosting to serve many web sites using one IP address.
 Large file support to be able to serve files whose size is greater than 2 GB on 32 bit OS.
Bandwidth throttling to limit the speed of responses in order to not saturate the network and to be able to serve more clients
       The different types of Web-Server are as follows:
Server Platforms
A term often used synonymously with operating system, a platform is the underlying hardware or software for a system and is thus the engine that drives the server.
Application Servers
Sometimes referred to as a type of middleware, application servers occupy a large chunk of computing territory between database servers and the end user, and they often connect the two. Middleware is software that connects two otherwise separate applications. For example, there are a number of middleware products that link a database system to a Web server. This allows users to request data from the database using forms displayed on a Web browser, and it enables the Web server to return dynamic Web pages based on the user's requests and profile.
Audio/Video Servers
Audio/Video servers bring multimedia capabilities to Web sites by enabling them to broadcast streaming multimedia content. Streaming is a technique for transferring data such that it can be processed as a steady and continuous stream.
Chat Servers
Chat servers enable a large number of users to exchange information in an environment similar to Internet newsgroups that offer real-time discussion capabilities.
Fax Servers
A fax server is an ideal solution for organizations looking to reduce incoming and outgoing telephone resources but that need to fax actual documents.
FTP Servers
One of the oldest of the Internet services, File Transfer Protocol makes it possible to move one or more files securely between computers while providing file security and organization as well as transfer control.
Groupware Servers
A groupware server is software designed to enable users to collaborate, regardless of location, via the Internet or a corporate intranet and to work together in a virtual atmosphere.
IRC Servers
An option for those seeking real-time discussion capabilities, Internet Relay Chat consists of various separate networks (or "nets") of servers that allow users to connect to each other via an IRC network.
List Servers
List servers offer a way to better manage mailing lists, whether they be interactive discussions open to the public or one-way lists that deliver announcements, newsletters, or advertising.
Mail Servers
Almost as ubiquitous and crucial as Web servers, mail servers move and store mail over corporate networks (via LANs and WANs) and across the Internet.
News Servers
News servers act as a distribution and delivery source for the thousands of public news groups currently accessible over the USENET news network.
Proxy Servers
Proxy servers sit between a client program (typically a Web browser) and an external server (typically another server on the Web) to filter requests, improve performance, and share connections.
Telnet Servers
A Telnet server enables users to log on to a host computer and perform tasks as if they're working on the remote computer itself.
Web Servers
At its core, a Web server serves static content to a Web browser by loading a file from a disk and serving it across the network to a user's Web browser. This entire exchange is mediated by the browser and server talking to each other using HTTP.
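The "load a file from disk and serve it over HTTP" core can be demonstrated with Python's standard library alone; this is a minimal sketch, with port 0 asking the OS for any free port:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler serves files from the current directory,
# which is exactly the static-content behavior described above.
httpd = HTTPServer(('', 0), SimpleHTTPRequestHandler)
print(f'Would serve static files on port {httpd.server_port}')
# httpd.serve_forever()   # uncomment to actually accept requests
httpd.server_close()
```

Production web servers add virtual hosting, large-file support, and bandwidth throttling on top of this same request/response loop.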



43. How does HTTP works?
Ans:
              HTTP, the Hypertext Transfer Protocol, is the application-level protocol that is used to transfer data on the Web. HTTP comprises the rules by which Web browsers and servers exchange information. Although most people think of HTTP only in the context of the World-Wide Web, it can be, and is, used for other purposes, such as distributed object management systems.
           HTTP is a request-response protocol. For example, a Web browser initiates a request to a server, typically by opening a TCP/IP connection. The request itself comprises

o a request line,

o a set of request headers, and

o an entity.

The server sends a response that comprises

o a status line,

o a set of response headers, and

o an entity.

The entity in the request or response can be thought of simply as the payload, which may be binary data. The other items are readable ASCII characters. When the response has been completed, either the browser or the server may terminate the TCP/IP connection, or the browser can send another request.
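The three-part structure of both messages can be shown by assembling a request and splitting a canned response by hand (host, path, and response text are placeholders; HTTP requires CRLF line endings):

```python
# Request = request line + headers + blank line + entity.
host, path, body = 'example.com', '/index.html', ''
request = (
    f'GET {path} HTTP/1.1\r\n'   # request line
    f'Host: {host}\r\n'          # request headers
    f'Connection: close\r\n'
    f'\r\n'                      # blank line ends the headers
    f'{body}'                    # entity (empty for a simple GET)
)
print(request.split('\r\n')[0])  # GET /index.html HTTP/1.1

# Response = status line + headers + blank line + entity.
response = 'HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello'
status_line, rest = response.split('\r\n', 1)
headers, entity = rest.split('\r\n\r\n', 1)
print(status_line)               # HTTP/1.1 200 OK
print(entity)                    # hello
```

The blank line separating headers from entity is what lets either side know where the readable ASCII part ends and the (possibly binary) payload begins.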
44. Explain the architecture of apache web server.
Ans:
The function of a web server is to service requests made through the HTTP protocol. Typically the server receives a request asking for a specific resource and returns the resource as a response. A client might reference a file in its request, and that file is returned; or, for example, a directory, and the content of that directory (codified in some suitable form) is returned. A client might also request a program, and it is the web server's task to launch that program and return its output to the client. Various other resources might be referenced in a client's request.
To summarize: the web server takes a request, decodes it, obtains the resource and hands it to the client.
Additional concerns, such as controlling access and client authorization, are also the responsibility of the web server. As has been said, the web server might execute programs in response to clients' requests; it must ensure that this is not a threat to the host system. In addition, the web server must be capable not only of responding to a high rate of requests, but also of satisfying each request as quickly as possible.
Figure illustrates the high level conceptual architecture. There is a core part of the server that is responsible for defining and following the steps in servicing a request and several modules that actually implement the different phases of handling the request.
As shall be seen later Figure does not capture an important characteristic of the architecture, namely, the predefined order in which modules are called, based on their advertised characteristics.

Figure .Apache Architecture
The idea is to keep the basic server code clean while allowing third-parties to override or extend even basic characteristics.
46. What is port? Differentiate IP and port.
Ans: Port: On computer and telecommunication devices, a port (noun) is generally a specific place for being physically connected to some other device, usually with a socket and plug of some kind. Typically, a personal computer is provided with one or more serial ports and usually one parallel port. The serial port supports sequential, one bit-at-a-time transmission to peripheral devices such as scanners and the parallel port supports multiple-bit-at-a-time transmission to devices such as printers.
    In programming, a port (noun) is a "logical connection place" and specifically, using the Internet's protocol, TCP/IP, the way a client program specifies a particular server program on a computer in a network.
       Port numbers range from 0 to 65535. Ports 0 to 1023 are reserved for use by certain privileged services. For the HTTP service, port 80 is defined as the default, and it does not have to be specified in the Uniform Resource Locator.
The differences between IP and port are as follows:
1. A port (noun) is generally a specific place for being physically connected to some other device, usually with a socket and plug of some kind; in programming, it is a logical connection place. The IP protocol, by contrast, is part of the Internet layer of the TCP/IP protocol suite. It is one of the most important Internet protocols because it allows the development and transport of IP datagrams (data packets), without, however, ensuring their delivery.
2. The IP address therefore serves to uniquely identify a computer on the network, while the port number specifies the application for which the data is intended.

47.  What are the port numbers for FTP, HTTP, telnet, POP, finger?
Ans:

Services      Port Number
FTP           21
HTTP          80
TELNET        23
POP           110
FINGER        79
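The table can be restated in code; where the operating system's services database is available, socket.getservbyname should agree with it (the entries present vary by system, hence the guard):

```python
import socket

# Well-known ports from the table above.
WELL_KNOWN_PORTS = {'ftp': 21, 'http': 80, 'telnet': 23, 'pop3': 110, 'finger': 79}

for name, port in WELL_KNOWN_PORTS.items():
    try:
        assert socket.getservbyname(name, 'tcp') == port
    except OSError:
        pass   # entry missing from this system's services database
print(WELL_KNOWN_PORTS['http'])   # 80
```

Note that the POP service is registered under the name "pop3", its current version.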

48. What is best-effort Service? Where it is used and why? What are the limitations of a best-effort service?
Ans: Best-effort Service
    A "Best Effort" service is one which does not provide full reliability. It usually performs some error control (e.g. discarding all frames which may have been corrupted) and may also provided some (limited) retransmission (e.g. CSMA/CD). The delivered data is not however guaranteed. A best effort service, normally requires reliability to be provided by a higher layer protocol.
          Best Effort Services are used in ………
Link Layer - HDLC (UI frames); Ethernet
Network Layer - IP (Datagram’s)
Transport Layer - UDP and UDP-Lite.
Services may be better than best effort, for example provide service guarantees or better expectations of the service. The Differentiated Services (diffserv) and Integrated Services (intserv) frameworks provide this type of service at the network-layer.
There has also been work on less than best effort services, specifically services that are designed to operate in the background. The IP Scavenger service is an example network-layer service that allows file transfers to safely operate in the background without impacting other network users. This service has as yet not been widely used.
 The limitations of a best-effort service……
    There are three different kinds of limitations to the “Best-Effort Service”, packet loss, end-to-end delay, and packet jitter.

Packet Loss
      A UDP segment is encapsulated in an IP datagram, and as the datagram wanders through the network, it passes through buffers (queues) in the routers in order to access outbound links. It is possible that one or more of the buffers on the route from sender to receiver is full and cannot admit the IP datagram, in which case the datagram is discarded and never received. Loss could be eliminated by sending the packets over TCP rather than over UDP. Recall that TCP retransmits packets that do not arrive at the destination. However, retransmission mechanisms are often considered unacceptable for interactive real-time audio applications such as Internet phone, because they increase end-to-end delay.

End – to – End Delay
    End-to-end delay is the accumulation of transmission, processing, and queuing delays in routers; propagation delays in the links; and end-system processing delays. For highly interactive audio applications, such as Internet phone, a human listener does not perceive end-to-end delays smaller than 150 msec; delays exceeding 400 msec can seriously hinder the interactivity in voice conversations. The receiving side of an Internet phone application will typically disregard any packets that are delayed more than a certain threshold, for example, more than 400 msec. Thus, packets that are delayed by more than the threshold are effectively lost.

Packet Jitter
    A crucial component of end-to-end delay is the random queuing delays in the routers. Because of these varying delays within the network, the time from when a packet is generated at the source until it is received at the receiver can fluctuate from packet to packet. This phenomenon is called jitter. The situation is analogous to driving cars on roads. Suppose you and your friend are each driving in your own cars from San Diego to Phoenix. Suppose you and your friend have similar driving styles, and that you both drive at 100 km/hour, traffic permitting. Finally, suppose your friend starts out one hour before you. Then, depending on intervening traffic, you may arrive at Phoenix more or less than one hour after your friend. If the receiver ignores the presence of jitter and plays out chunks as soon as they arrive, then the resulting audio quality can easily become unintelligible at the receiver. Fortunately, jitter can often be removed by using sequence numbers, timestamps, and a playout delay.


49.     What is jitter? How it is removed at the receiver end?
Answer:    Jitter is the time variation of a periodic signal in electronics and telecommunications, often in relation to a reference clock source. Jitter may be observed in characteristics such as the frequency of successive pulses, the signal amplitude, or phase of periodic signals. Jitter is a significant, and usually undesired, factor in the design of almost all communications links (e.g., USB, PCI-e, SATA, OC-48). In clock recovery applications it is called timing jitter.
In the context of computer networks, the term jitter is often used as a measure of the variability over time of the packet latency across a network. However, for this use, the term is imprecise. The standards-based term is packet delay variation (PDV). PDV is an important quality of service factor in assessment of network performance. A network with constant latency has no variation (or jitter). Packet jitter is expressed as an average of the deviation from the network mean latency.

Fig. Delay Jitter
Consider the end-to-end delays of two consecutive packets.  The difference can be more or less than 20 milliseconds, giving us delay jitter.  Too much jitter can impair the conversation.
There are three main tools useful in removing jitter:
Sequence numbers:
Each packet header contains a sequence number assigned by the sender and incremented for each packet sent.
Timestamps:
Each packet header contains the time at which the data chunk in the packet was generated.
Delayed playout:
The playout of packets is delayed long enough so that most of the packets are received before their playout times.  This playout delay can be fixed or vary adaptively throughout the audio session.  Packets not arriving before their scheduled playout times are considered lost and forgotten.
50.     What is fixed playout delay and adaptive playout delay?
Answer:    Fixed playout delay and adaptive playout delay are explained with the help of examples given below:
Internet Phone: Fixed Playout Delay
Receiver attempts to playout each audio chunk exactly q milliseconds after the chunk was generated.
chunk has time stamp t: play out chunk at t+q
chunk arrives after t+q: data arrives too late for playout, and the data is considered “lost”
There is a significant tradeoff in selecting q:
large q: less loss, but more delay; if delay is too high interactivity is sacrificed
small q: better interactive experience but higher possibility of loss; if too many audio chunks are lost, quality is sacrificed.

Sender generates packets every 20 msec during talk spurt.
First packet received at time r
First playout scenario: playout begins at p
Second playout scenario: playout begins at p’
Adaptive Playout Delay
Goal: minimize playout delay, keeping late loss rate low
Approach: adaptive playout delay adjustment:
Estimate network delay and variance of network delay, and adjust playout delay at beginning of each talk spurt.
Silent periods compressed and elongated, but this is not noticeable.
Chunks still played out every 20 milliseconds during talk spurt.

Dynamic estimate of average delay at receiver:

       di = (1 - u) di-1 + u (ri - ti)

       where u is a fixed constant (e.g., u = 0.01), ti is the timestamp of packet i, and ri is the time at which packet i is received.
Also useful to estimate the average deviation of the delay, vi :

       vi = (1 - u) vi-1 + u |ri - ti - di|
The estimates di and vi are calculated for every received packet,
    although they are only used at the beginning of a talk spurt.
For the first packet in a talk spurt, playout time is:

       pi = ti + di + K vi

where K is a positive constant (such as 4).  The purpose is to set this time far enough in the future to reduce the percentage of lost packets.
Remaining packets in the talk spurt are played out periodically.
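The estimates above can be sketched directly in code; the packet trace (timestamps and receive times, in msec) is invented for illustration:

```python
u, K = 0.01, 4   # smoothing constant and safety factor from the text

def update(d, v, t, r):
    """One EWMA step: t = sender timestamp, r = receive time (msec)."""
    d = (1 - u) * d + u * (r - t)          # average delay estimate
    v = (1 - u) * v + u * abs(r - t - d)   # average delay deviation
    return d, v

d, v = 0.0, 0.0
# Updated for every received packet, used only at a talk-spurt boundary.
for t, r in [(0, 50), (20, 72), (40, 95), (60, 108)]:
    d, v = update(d, v, t, r)

# Playout time for the first packet of the next talk spurt.
t_first = 80
p = t_first + d + K * v
print(p > t_first)   # True: playout is scheduled into the future
```

Because u is small, d and v react slowly to each packet, which is what makes the playout point stable within a talk spurt.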
51.     Describe forward error correction method for recovering from packet loss.
Answer:    For recovering from packet loss the following Forward Error Correction (FEC) schemes are used:
    Forward Error Correction (FEC): First Scheme
For every group of n audio chunks, create a redundant chunk by exclusive OR-ing the n original chunks.
Send out these n+1 chunks, increasing the bandwidth by factor 1/n.
We can reconstruct the original n chunks if there is at most one lost chunk from the n+1 chunks.
Playout delay needs to be fixed to the time to receive all n+1 packets.
Tradeoff:
Increase n, less bandwidth waste.
Increase  n, longer playout delay needed.
Increase n, higher probability that 2 or more chunks will be lost in the set of n+1 chunks.
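The first scheme is a single XOR parity chunk; a minimal sketch (with made-up 4-byte chunks and n = 3) shows how one lost chunk is rebuilt from the survivors:

```python
def xor_chunks(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, byte in enumerate(c):
            out[i] ^= byte
    return bytes(out)

group = [b'aaaa', b'bbbb', b'cccc']   # n = 3 original audio chunks
parity = xor_chunks(group)            # the redundant (n+1)st chunk

# Suppose chunk 1 (b'bbbb') is lost: XOR of the survivors recovers it,
# because x ^ x = 0 cancels every chunk except the missing one.
received = [group[0], group[2], parity]
recovered = xor_chunks(received)
print(recovered == b'bbbb')   # True
```

If two chunks of the group are lost, the XOR no longer pins down either one, which is exactly the at-most-one-loss limit stated above.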
Second FEC Scheme
“Piggyback lower quality stream”
Send lower resolution audio stream as the redundant information.
For example, nominal stream PCM at 64 kbps and redundant stream GSM at 13 kbps.

Whenever there is non-consecutive loss, the receiver can conceal the loss, using the lower quality stream.  If this happens infrequently, we are in pretty good shape.
 Can also append (n-1)st and (n-2)nd low-bit rate chunks and so on … we get more redundancy, but increase bandwidth needs and playout time.
Interleaving
Chunks are broken up into smaller units.
For example, we could have 5 msec units, 4 per chunk.
Packet contains small units from different chunks.

If a packet is lost, we still have most of every chunk.
Has no redundancy overhead, and reasonable perceived quality.
But, this adds to playout delay.
Receiver-Based Repair
The receiver attempts to produce a replacement for a lost packet similar to the original.
Many audio signals, particularly speech, exhibit large amounts of short term self-similarity.
Works reasonably well for small loss rates (< 15%) and for small packets (4-40 milliseconds).  If the loss length approaches the size of a phoneme (5-100 milliseconds), entire sounds may be missed and not recovered.
Two main techniques:
Packet repetition:  replace a lost packet with a copy of the immediately preceding packet.  Cheap to do and quality is reasonable if loss is small and infrequent.
Interpolation:  interpolate between preceding and following packet to make replacement.  Better quality, but more expensive to do.





52. Discuss the method interleaving
Ans: In computer science, interleaving is a way to arrange data in a non-contiguous way to increase performance.
It is used in:
In telecommunications: time-division multiplexing (TDM), data transmission, error correction
computer memory
disk storage
Interleaving is mainly used in data communication, multimedia file formats, radio transmission (for example in satellites) or by ADSL. The term multiplexing is sometimes used to refer to the interleaving of digital signal data.
Interleaving is also used for multidimensional data structures, see Z-order (curve).
Interleaving in disk storage
Historically, interleaving was used in ordering block storage on disk-based storage devices such as the floppy disk and the hard disk. The primary purpose of interleaving was to adjust the timing differences between when the computer was ready to transfer data, and when that data was actually arriving at the drive head to be read. Interleaving was very common prior to the 1990s, but faded from use as processing speeds increased. Modern disk storage is not interleaved.
Interleaving in data transmission
Interleaving is used in digital data transmission technology to protect the transmission against burst errors. These errors overwrite a lot of bits in a row, so a typical error correction scheme that expects errors to be more uniformly distributed can be overwhelmed. Interleaving is used to help stop this from happening.
Data is often transmitted with error control bits that enable the receiver to correct a certain number of errors that occur during transmission. If a burst error occurs, too many errors can be made in one code word, and that codeword cannot be correctly decoded. To reduce the effect of such burst errors, the bits of a number of codewords are interleaved before being transmitted. This way, a burst error affects only a correctable number of bits in each codeword, and the decoder can decode the codewords correctly.
This method is often used because it is a less complex and cheaper way to handle burst errors than increasing the power of the error correction scheme.
Error correction
Transmission without interleaving:
Error-free message:                                 aaaabbbbccccddddeeeeffffgggg
Transmission with a burst error:                    aaaabbbbccc____deeeeffffgggg
The codeword dddd is altered in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.
With interleaving:
Error-free code words:                              aaaabbbbccccddddeeeeffffgggg
Interleaved:                                        abcdefgabcdefgabcdefgabcdefg
Transmission with a burst error:                    abcdefgabcd____bcdefgabcdefg
Received code words after deinterleaving:           aa_abbbbccccdddde_eef_ffg_gg
In each of the codewords aaaa, eeee, ffff, gggg, only one bit is altered, so a one-bit error-correcting code will decode everything correctly.
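The example above can be reproduced in a few lines: the seven 4-symbol codewords are sent column by column, so the 4-symbol burst lands on four different codewords after deinterleaving:

```python
def interleave(words):
    """Transmit column-by-column: first symbol of every word, then second, ..."""
    return ''.join(''.join(col) for col in zip(*words))

def deinterleave(stream, n_words):
    """Undo it: word i is every n_words-th symbol starting at offset i."""
    return [stream[i::n_words] for i in range(n_words)]

words = ['aaaa', 'bbbb', 'cccc', 'dddd', 'eeee', 'ffff', 'gggg']
sent = interleave(words)
print(sent)                  # abcdefgabcdefgabcdefgabcdefg

# Burst error wipes out symbols 11-14, as in the example above.
corrupted = sent[:11] + '____' + sent[15:]
print(deinterleave(corrupted, 7))
# ['aa_a', 'bbbb', 'cccc', 'dddd', 'e_ee', 'f_ff', 'g_gg']
```

No codeword loses more than one symbol, so a one-symbol error-correcting code recovers everything, matching the received codewords shown above.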
Disadvantages of interleaving
Use of interleaving techniques increases latency, because the entire interleaved block must be received before the codewords can be deinterleaved and decoded.
53. Describe I/O multiplexing using the select() and poll() functions
Ans: One approach is to set signal handlers to catch when I/O is available, and then put the process to sleep. This sounds good in theory if you only have a few open descriptors and infrequent I/O requests: because the process is sleeping, it will not tie up the CPU, and it will execute only when I/O is available. The problem with this approach, however, is that signal handling is somewhat expensive. A better approach is I/O multiplexing, which is what the select(), poll() and kqueue() interfaces provide. Through these, the kernel will manage the descriptors and wake the process when I/O is available.
select
The first interface I will cover is select(). The format is:
  int  select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds,  struct timeval *timeout);
The first argument to select has caused some confusion over the years. The proper usage of the nfds argument is to set it to the maximum file descriptor value plus one. In other words, if you have a set of descriptors {0, 1, 8}, the nfds parameter should be set to 9, because the highest numbered descriptor in your set is 8. Some mistake this parameter to mean the total number of descriptors plus one, which in our example would incorrectly give 4. Remember that a descriptor is simply an integer value, so your program will need to figure out the largest-valued descriptor you want to select on.
Select will then examine the next three arguments, readfds, writefds and exceptfds, for any pending reading, writing or exceptional conditions, in that order. (For more information, see select(2).) Note that if no descriptors are to be watched in readfds, writefds or exceptfds, the corresponding argument should be set to NULL.
The readfds, writefds and exceptfds arguments are to be set with the four macros listed below.
FD_ZERO(&fdset);
The FD_ZERO macro is used to clear all bits in the desired descriptor set. One very important note: this macro should always be called first when using select; otherwise, select will behave unpredictably.
FD_SET(fd, &fdset);
The FD_SET macro is used when you want to add an individual descriptor to a set of active descriptors.
FD_CLR(fd, &fdset);
The FD_CLR macro is used when you want to remove an individual descriptor from a set of active descriptors.
FD_ISSET(fd, &fdset);
The FD_ISSET macro is used once select returns to test if a particular descriptor is ready for an I/O operation.

Poll
For the most part, the I/O discussed is BSD specific. System V includes support for a special type of I/O known as streams. Streams, along with sockets, have a priority attribute sometimes referred to as the data-band. This data-band can be set to specify a high priority for certain data on the stream. BSD originally did not have support for this feature, but some have added System V emulation, and can support certain types. Because we don't focus on System V, we'll make reference to the data-band or data priority band only. For more information, see System V STREAMS.
The poll function is similar to select:
  int  poll(struct pollfd *fds, unsigned int nfds, int timeout);
Unlike select, which is native to BSD, poll was created by System V Unix and was not supported on earlier versions of BSD. Currently poll is supported on all major BSD systems.
Similar to select, poll will multiplex a set of given file descriptors. When specifying them, you have to use an array of structures, with each structure representing a single file descriptor. One advantage of using poll over select is that you can detect a few rare conditions that select will not. These conditions are POLLERR, POLLHUP, and POLLNVAL, discussed later. Although there is much discussion about choosing select or poll, for the most part, it'll depend on your personal taste. The structure used by poll is the pollfd structure, as below:
  struct pollfd {
      int     fd;             /* which file descriptor to poll */
      short   events;         /* events we are interested in */
      short   revents;        /* events found on return */
  };
 
fd
The fd member specifies the file descriptor you wish to poll. If you want to remove a particular descriptor, set its fd member to -1. That way you avoid having to shuffle the array around, and poll will also clear all events listed in the revents member.
events, revents
The events member is a bitmask specifying the events you are interested in for that specific descriptor. The revents member is also a bitmask, but its value is set by poll with the event(s) that have occurred on that specific descriptor. These events are defined below.
  #define POLLIN          0x0001
The POLLIN event lets you specify that your program will poll for readable data events for the descriptor. Note that this data will not include high priority data, such as out-of-band data on sockets.
  #define POLLPRI         0x0002
The POLLPRI event is used to specify that your program is interested in polling for any high priority events for the descriptor.
  #define POLLOUT         0x0004
The POLLOUT event is used to specify that your program is interested in being notified when the descriptor can be written to without blocking.
  #define POLLWRNORM      POLLOUT
POLLWRNORM is a synonym for POLLOUT.



54. What do you mean by blocking and non-blocking sockets?
Ans:
select() can be used to detect when data is available to read from a socket. However, there are times when it's useful to be able to call send(), recv(), connect(), accept(), etc. without having to wait for the result. For example, when you issue a call to connect(), your program doesn't regain control until either the connection is made or an error occurs; but if the user presses (or clicks) a stop button, you want connect() to stop trying to connect.
The solution to this problem is called "non-blocking sockets".
A blocking socket will not return control until it has sent (or received) all data specified for the operation. This is true only on Linux systems. On other systems, such as FreeBSD, it is normal for a blocking socket not to send all data. The application must check the return value to determine how many bytes have been sent or received, and it must resend any data not already processed. Blocking may also cause problems if a socket continues to listen: a program may hang as the socket waits for data that may never arrive.
A socket is typically set to blocking or nonblocking mode using the fcntl() or ioctl() functions.
By default, TCP sockets are in "blocking" mode. For example, when you call recv() to read from a stream, control isn't returned to your program until at least one byte of data is read from the remote site. This process of waiting for data to appear is referred to as "blocking". The same is true for the write() API, the connect() API, etc. When you run them, the connection "blocks" until the operation is complete.
It's possible to set a descriptor so that it is placed in "non-blocking" mode. When placed in non-blocking mode, you never wait for an operation to complete. This is an invaluable tool if you need to switch between many different connected sockets, and want to ensure that none of them cause the program to "lock up."
Non-blocking sockets have a similar effect on the accept() API. When you call accept(), and there isn't already a client connecting to you, it will return 'Operation Would Block', to tell you that it can't complete the accept() without waiting...
The connect() API is a little different. If you try to call connect() in non-blocking mode, and the API can't connect instantly, it will return the error code for 'Operation In Progress'. When you call connect() again, later, it may tell you 'Operation Already In Progress' to let you know that it's still trying to connect, or it may give you a successful return code, telling you that the connect has been made.
Non-blocking sockets can also be used in conjunction with the select() API. In fact, if you reach a point where you actually WANT to wait for data on a socket that was previously marked as "non-blocking", you could simulate a blocking recv() just by calling select() first, followed by recv().
The "non-blocking" mode is set by changing one of the socket's "flags". The flags are a series of bits, each one representing a different capability of the socket. So, to turn on non-blocking mode requires three steps:
Call the fcntl() API to retrieve the socket descriptor's current flag settings into a local variable.
In our local variable, set the O_NONBLOCK (non-blocking) flag on. (being careful, of course, not to tamper with the other flags)
Call the fcntl() API to set the flags for the descriptor to the value in our local variable.
One of the first issues that you’ll encounter when developing your Windows Sockets applications is the difference between blocking and non-blocking sockets. Whenever you perform some operation on a socket, it may not be able to complete immediately and return control back to your program. For example, a read on a socket cannot complete until some data has been sent by the remote host. If there is no data waiting to be read, one of two things can happen: the function can wait until some data has been written on the socket, or it can return immediately with an error that indicates that there is no data to be read.
The first case is called a blocking socket. In other words, the program is "blocked" until the request for data has been satisfied. When the remote system does write some data on the socket, the read operation will complete and execution of the program will resume. The second case is called a non-blocking socket, and requires that the application recognize the error condition and handle the situation appropriately. Programs that use non-blocking sockets typically use one of two methods when sending and receiving data. The first method, called polling, is when the program periodically attempts to read or write data from the socket (typically using a timer). The second, and preferred method, is to use what is called asynchronous notification. This means that the program is notified whenever a socket event takes place, and in turn can respond to that event. For example, if the remote program writes some data to the socket, a "read event" is generated so that the program knows it can read the data from the socket at that point.
For historical reasons, the default behavior is for socket functions to "block" and not return until the operation has completed. However, blocking sockets in Windows can introduce some special problems. For 16-bit applications, the blocking function will enter what is called a "message loop" where it continues to process messages sent to it by Windows and other applications. Since messages are being processed, this means that the program can be re-entered at a different point with the blocked operation parked on the program's stack. For example, consider a program that attempts to read some data from the socket when a button is pressed. Because no data has been written yet, it blocks and the program goes into a message loop. The user then presses a different button, which causes code to be executed, which in turn attempts to read data from the socket, and so on.
Blocking socket functions can introduce a different type of problem in 32-bit applications because blocking functions will prevent the calling thread from processing any messages sent to it. Since many applications are single-threaded, this can result in the application being unresponsive to user actions. To resolve the general problems with blocking sockets, the Windows Sockets standard states that there may only be one outstanding blocked call per thread of execution. This means that 16-bit applications that are re-entered (as in the example above) will encounter errors whenever they try to take some action while a blocking function is already in progress. With 32-bit programs, the creation of worker threads to perform blocking socket operations is a common approach, although it introduces additional complexity into the application.
The SocketWrench control facilitates the use of non-blocking sockets by firing events. For example, a Read event is generated whenever the remote host writes on the socket, which tells your application that there is data waiting to be read. The use of non-blocking sockets will be demonstrated in the next section, and is one of the key areas in which a control has a distinct advantage over coding directly against the Windows Sockets API.
In summary, there are three general approaches that can be taken when building an application with the control in regard to blocking or non-blocking sockets:
Use a blocking (synchronous) socket. In this mode, the program will not resume execution until the socket operation has completed. Blocking sockets in a 16-bit application will allow it to be re-entered at a different point, and 32-bit applications will stop responding to user actions. This can lead to complex interactions (and difficult debugging) if there are multiple active controls in use by the application.
Use a non-blocking (asynchronous) socket, which allows your application to respond to events. For example, when the remote system writes data to the socket, a Read event is generated for the control. Your application can respond by reading the data from the socket, and perhaps send some data back, depending on the context of the data received.
Use a combination of blocking and non-blocking socket operations. The ability to switch between blocking and non-blocking modes "on the fly" provides a powerful and convenient way to perform socket operations. Note that the warning regarding blocking sockets also applies here.
If you decide to use non-blocking sockets in your application, it’s important to keep in mind that you must check the return value from every read and write operation, since it is possible that you may not be able to send or receive all of the specified data. Frequently, developers encounter problems when they write a program that assumes a given number of bytes can always be written to, or read from, the socket. In many cases, the program works as expected when developed and tested on a local area network, but fails unpredictably when the program is released to a user that has a slower network connection (such as a serial dial-up connection to the Internet). By always checking the return values of these operations, you ensure that your program will work correctly, regardless of the speed or configuration of the network.


55. Is RTSP an application-level Protocol? Write the functions of RTSP.
Answer: The Real Time Streaming Protocol, or RTSP, is an application-level protocol for control over the delivery of data with real-time properties. RTSP provides an extensible framework to enable controlled, on-demand delivery of real-time data, such as audio and video. Sources of data can include both live data feeds and stored clips. This protocol is intended to control multiple data delivery sessions, provide a means for choosing delivery channels such as UDP, multicast UDP and TCP, and provide a means for choosing delivery mechanisms based upon RTP.
Functions of RTSP.
RTSP is a streaming protocol; this means it attempts to facilitate scenarios in which the multimedia data is being transferred and rendered (that is, video displayed and audio played) simultaneously.
RTSP establishes and controls either a single or several time-synchronized streams of continuous media.
 RTSP acts as a network remote control for multimedia servers.
RTSP typically uses a Transmission Control Protocol (TCP) connection for control of the streaming media session, although User Datagram Protocol (UDP) also can be used for this purpose.
Clients use RTSP requests to control the session and to request the server to perform actions such as starting or stopping the flow of multimedia data. For each request, a corresponding RTSP response is sent in the opposite direction. Servers can also send RTSP requests to clients; for example, to inform them that session state has changed.

56. What are the various RTSP messages?
Ans.
RTSP Message
RTSP is a text-based protocol and uses the ISO 10646 character set in UTF-8 encoding. Lines are terminated by CRLF, but receivers should be prepared to also interpret CR and LF by themselves as line terminators.
Text-based protocols make it easier to add optional parameters in a self-describing manner. Since the number of parameters and the frequency of commands is low, processing efficiency is not a concern. Text-based protocols, if done carefully, also allow easy implementation of research prototypes in scripting languages such as Tcl, Visual Basic and Perl.
The 10646 character set avoids tricky character set switching, but is invisible to the application as long as US-ASCII is being used. This is also the encoding used for RTCP. ISO 8859-1 translates directly into Unicode with a high-order octet of zero. ISO 8859-1 characters with the most-significant bit set are represented as 1100001x 10xxxxxx.
RTSP messages can be carried over any lower-layer transport protocol that is 8-bit clean.
Requests contain methods, the object the method is operating upon and parameters to further describe the method. Methods are idempotent, unless otherwise noted. Methods are also designed to require little or no state maintenance at the media server.
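As an illustration of the request/response format (the URL, CSeq, port and session values below are hypothetical), a minimal RTSP SETUP exchange might look like this, with each line terminated by CRLF:

```text
SETUP rtsp://example.com/media/movie RTSP/1.0
CSeq: 1
Transport: RTP/AVP;unicast;client_port=8000-8001

RTSP/1.0 200 OK
CSeq: 1
Session: 12345678
Transport: RTP/AVP;unicast;client_port=8000-8001;server_port=9000-9001
```

Here SETUP is the method, the rtsp:// URL is the object it operates on, and the Transport header carries the parameters that further describe the method.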
57. Explain the behavior of an RTP Sender and receiver with neat diagram.
Ans.
The core of any system for delivery of real time audio/video over IP is RTP: It provides the common media transport layer, independent of the signaling protocol and application. The behavior of the RTP Sender and the Receiver is described below:
Behavior of RTP Sender
A sender is responsible for capturing and transforming audiovisual data for transmission, as well as for generating RTP packets. It may also participate in error correction and congestion control by adapting the transmitted media stream in response to receiver feedback.
    Uncompressed media data (audio or video) is captured into a buffer, from which compressed frames are produced. Frames may be encoded in several ways depending on the compression algorithm used, and encoded frames may depend on both earlier and later data.
    Compressed frames are loaded into RTP packets, ready for sending. If frames are large, they may be fragmented into several RTP packets; if they are small, several frames may be bundled into a single RTP packet. Depending on the error correction scheme in use, a channel coder may be used to generate error correction packets or to reorder packets before transmission.
    After the RTP packets have been sent, the buffered media data corresponding to those packets is eventually freed. The sender must not discard data that might be needed for error correction or for the encoding process. This requirement may mean that the sender must buffer the data for some time after the corresponding packets have been sent, depending on the codec and the error correction scheme used.
    The sender is responsible for generating periodic status reports for the media streams it is generating, including those required for lip synchronization. It also receives reception quality feedback from other participants and may use that information to adapt its transmission.


Fig1. RTP Sender

Behavior of an RTP receiver
A receiver is responsible for collecting RTP packets from the network, correcting any losses, recovering the timing, decompressing the media, and presenting the result to the user. It also sends reception quality feedback, allowing the sender to adapt the transmission to the receiver, and it maintains a database of participants in the session. A possible block diagram for the receiving process is shown in the figure; implementations sometimes perform the operations in a different order depending on their needs.
    The first step of the receive process is to collect packets from the network, validate them for correctness, and insert them into a sender-specific input queue. Packets are collected from the input queue and passed to an optional channel-coding routine to correct for loss. Following the channel coder, packets are inserted into a source-specific playout buffer. The playout buffer is ordered by timestamp, and the process of inserting packets into the buffer corrects any reordering induced during transport. Packets remain in the playout buffer until complete frames have been received, and they are additionally buffered to remove any variation in inter-packet timing induced by the network.

Fig2. RTP Receiver


58. What is the difference between end-to-end delay and delay jitter? What are the causes of delay jitter?
Answer:
 Difference between end-to-end delay and delay jitter:
End-to-end delay is the time it takes a packet to travel across the network from source to destination. Delay jitter is the fluctuation of end-to-end delay from one packet to the next.
 Causes of delay jitter:
Delay jitter is caused mainly by variations in the queueing delays that packets experience at the routers along the path, which change with the level of congestion. Different packets may also follow different routes through the network, and variable processing and packetization times at the end systems add further fluctuation.
59. Which are the four kinds of components specified by the H.323 standard? Explain.

Ans: H.323 is a standard for real-time audio and video conferencing among end systems on the Internet. It also covers how end systems attached to the Internet communicate with telephones attached to ordinary circuit-switched telephone networks.


                                                Fig: H.323 Network Elements

There are four components in a standard H.323 network:
Terminals
Gateways
Gatekeepers
Multipoint control units (MCUs)
Terminals:
Terminals are the LAN client endpoints in an IP telephony network. An H.323 terminal can communicate with other H.323 terminals, an H.323 gateway, or an H.323 MCU. All terminals must support voice communication. H.323 terminals support multipoint conversations and provide the ability to initiate ad-hoc conferences. They also have multicast features which allow multiple people to participate in a call without centralized mixing or switching.
Gateways :
Gateways enable standard telephones to use VoIP services. They provide communication between H.323 terminals and terminals connected to a circuit-switched network, or to another H.323 gateway. The gateway functions as a translator, providing the interface between the PSTN and the IP network. The gateway is responsible for mapping the call signaling and control protocols between dissimilar networks. It is also responsible for media mapping (i.e., multiplexing, rate matching, audio transcoding) between the networks.
Gatekeepers :
Gatekeepers are the most important component in an H.323 environment. A network of H.323 terminals and gateways under the control of a particular gatekeeper forms an integrated sub-network within the larger IP network environment. The gatekeeper's functions include:-
Directory server: Using information obtained during terminal registration, this function translates an H.323 alias address to an IP (transport) address. This allows the user to have meaningful, unchanging names to reference other users in the system. These names are arbitrary and may appear similar to those used in e-mail or voice mail applications.
Supervisory: The gatekeeper may be used to grant permission to make a call. This can be used to apply bandwidth limits reducing the likelihood of congestion within the network.
Call signaling: The gatekeeper may perform call routing functions to provide supplementary services. It can also provide Multipoint Controller functionality supporting calls with a large number of participants.
Call management: The gatekeeper may be used to perform call accounting and call management. 
Multipoint control units (MCUs) :
A multipoint control unit (MCU) is an endpoint in the network. It provides the ability for three or more terminals or gateways to participate in a multipoint conference. An MCU provides conference management, media switching and multipoint conferencing.
      There are three types of multipoint conferences:-
Centralized: All terminals have point-to-point communication streams with the MCU. The MCU is responsible for management of the conference. It receives, processes, and sends the voice packets to other terminals.
Decentralized: Terminals communicate directly with each other. An MCU is not directly involved.
Mixed multipoint: This represents a mixture of centralized and decentralized conferences. The MCU ensures that operations of the conference are transparent to each terminal.





60. How are RTP and RTCP packets as part of the same session distinguished?

Ans: Real-time Transport Protocol (RTP) is a protocol designed for use in online video conferencing applications involving multiple participants. The RTP Control Protocol (RTCP) is also a protocol which a multimedia networking application can use in conjunction with RTP. While the RTP provides the transport of real-time data packets, the RTP Control Protocol (RTCP) monitors the quality of service provided to existing RTP sessions.


Fig: RTP and RTCP Packet Delivery System
Both RTP and RTCP use UDP for communication. The primary function of RTCP is to provide feedback about the quality of RTP data distribution. RTCP packets are transmitted by each participant in an RTP session to all other participants in the session. For an RTP session, typically there is a single multicast address, and all RTP and RTCP packets belonging to the session use that multicast address. Within a session, RTP and RTCP packets are distinguished from each other through the use of distinct port numbers: by convention, RTP uses an even-numbered port and RTCP uses the next higher (odd-numbered) port.
61. What are the advantages of a circuit switching network over a packet switching network?
Ans: In a circuit-switched network, a dedicated communications path is established between two stations through the nodes of the network. That path is a connected sequence of physical links between nodes. On each link, a logical channel is dedicated to the connection.
But in Packet Switching, no direct physical link is established. Data are sent out in a sequence of small chunks, called packets. Each packet is passed through the network from node to node along some path leading from source to destination. At each node, the entire packet is received, stored briefly, and then transmitted to the next node.
The advantages of circuit switching over packet switching are:-
Circuit-switching is more reliable than packet-switching since a dedicated connection is established between the communicating devices.
Data loss is less in circuit switching than packet switching.
Circuit switching is more suitable than packet switching for voice communications.
Delays in circuit switching are lower than those in packet switching.

62. Write the functions of SIP.

Ans: SIP is an application-layer control protocol that can establish, modify, and terminate multimedia sessions (conferences) such as Internet telephony calls. Other feasible application examples include video conferencing, streaming multimedia distribution, instant messaging, presence information and online games. The protocol can be used for creating, modifying and terminating two-party (unicast) or multiparty (multicast) sessions consisting of one or several media streams.

SIP works between the session and application layers of the OSI model and is not confined to any one IP version. This means that it can work in and between IPv4 and IPv6 models. With the desire to keep SIP as flexible as possible, most of SIP's message and header field syntax is derived from the HTTP/1.1 specification, but is not tied to the HTTP/1.1 protocol.

SIP Functionality
SIP provides the set-up/establishing, tying together, and tear-down/terminating of multimedia communications. SIP does this by providing five different functions:



1. User Location
SIP determines user locations by a registration process. When a soft-phone is activated on a laptop, it sends out a registration to the SIP server announcing availability to the communications network. Voice over-IP (VoIP) phones, cellular phones, or even complete teleconferencing systems can be registered as well. Depending on the registration point chosen, there may be several different locations registered simultaneously.

2. User Availability
User availability is simply a method of determining whether or not a user would be willing to answer a request to communicate. If a user “calls” and no one answers, SIP determines that a user is not available. A user can have several locations registered, but might only accept incoming communications on one device. If that is not answered, it transfers to another device, or transfers the call to another application, such as voicemail.

3. User Capabilities
With all the various different methods and standards of multimedia communications, something is needed to check for compatibility between the communications and the users’ capabilities. For example, if a user has an IP phone on their desk, a white-board conference via that device would not work. This function also determines which encryption/decryption methods a user can support.

4. Session Setup
SIP establishes the session parameters for both ends of the communication; more specifically, where one person calls and the other answers. SIP provides the means to set up and/or establish communications.

5. Session Management
This function provides the greatest amount of user awe. Provided a device is capable, a user could transfer from one device to another, such as from an IP-based phone to a laptop, without causing a noticeable impact. A user's overall capabilities could change, such as being able to start new applications like white-board sharing, perhaps affecting the voice quality temporarily as SIP re-evaluates and modifies the communications streams to restore the voice quality. With SIP session management, a user can also change a session by making it a conference call, changing a telephone call to a video conference, or opening an in-house developed application. And finally, SIP terminates the communications.
63. What do you mean by content delivery network?
Ans: A content delivery network or content distribution network (CDN) is a system of computers containing copies of data, placed at various points in a network so as to maximize bandwidth for access to the data from clients throughout the network. A client accesses a copy of the data near to the client, as opposed to all clients accessing the same central server, thereby causing a bottleneck near that server.
Content types include web objects, downloadable objects (media files, software, documents), applications, real time media streams, and other components of internet delivery (DNS, routes, and database queries).
The CDN is a caching system that directs customers to the nearest caching server (or node). As a customer accesses a website, they retrieve content from the node instead of the origin server, reducing the load on that server and allowing for much faster delivery of the content. With on-demand propagation, content from the origin site is pushed out to each caching server only when it is requested from a specific geographic location. This results in increased performance and cost savings.

64. Give an example of scheduling policy and explain.
Ans: CPU scheduling is the basis of multi-programmed operating systems. Almost all computer resources are scheduled before use. There are many different scheduling policies. One of them is First-Come, First-Served (FCFS) scheduling.

It is the simplest CPU scheduling policy. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.
The FCFS scheduling policy is non preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.  The FCFS policy is particularly troublesome for time sharing systems,where it is important that each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.    
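The FIFO-queue behaviour above can be simulated directly. The sketch below is illustrative only: it tracks just CPU burst times (no I/O), and reports how long each process waits before first getting the CPU. The burst values in the example are the classic textbook case showing how one long process at the head of the queue delays everyone behind it.

```python
from collections import deque

def fcfs(burst_times):
    """Simulate First-Come, First-Served scheduling.

    Processes arrive in list order; returns each process's waiting time.
    """
    ready = deque(burst_times)   # FIFO ready queue, head gets the CPU
    clock, waits = 0, []
    while ready:
        burst = ready.popleft()  # dispatch the process at the head
        waits.append(clock)      # it has waited 'clock' time units
        clock += burst           # non-preemptive: runs until it finishes
    return waits

print(fcfs([24, 3, 3]))  # prints [0, 24, 27]
```

With bursts [24, 3, 3] the average wait is (0 + 24 + 27) / 3 = 17 units, whereas running the short processes first would give only 3 units: this is the convoy effect that makes FCFS poor for time-sharing.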
65. What do you mean by Quality of Service in the Internet?
Ans: In the field of computer networking and other packet-switched telecommunication networks, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. For example, a required bit rate, delay, jitter, packet dropping probability, and/or bit error rate may be guaranteed. QoS guarantees are important when network capacity is insufficient, especially for real-time streaming multimedia applications such as voice over IP, online games, and IPTV, since these often require a fixed bit rate and are delay sensitive, and in networks where capacity is a limited resource, for example in cellular data communication. In the absence of network congestion, QoS mechanisms are not required.
A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example during a session establishment phase. During the session it may monitor the achieved level of performance, for example the data rate and delay, and dynamically control scheduling priorities in the network nodes. It may release the reserved capacity during a tear down phase.
A best-effort network or service does not support quality of service. An alternative to complex QoS control mechanisms is to provide high quality communication over a best-effort network by over-provisioning the capacity so that it is sufficient for the expected peak traffic load.
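One widely used QoS traffic-shaping mechanism is the token bucket, which enforces an average rate while permitting short bursts. The sketch below is a minimal, simplified illustration (one token per packet, caller-supplied timestamps), not a production shaper; real implementations work on byte counts and integrate with the network stack's queueing layer.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter, a common QoS shaping mechanism."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last refill

    def allow(self, now, cost=1):
        """Admit a packet of 'cost' tokens at time 'now', else drop/queue it."""
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A bucket built with `TokenBucket(rate=10, capacity=5)` admits a burst of up to 5 packets immediately, then throttles traffic to 10 packets per second; traffic exceeding the contract is dropped or queued, which is how a network node can hold a flow to an agreed rate.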
In the field of telephony, quality of service was defined in the ITU standard X.902 as "A set of quality requirements on the collective behavior of one or more objects". Quality of Service comprises requirements on all the aspects of a connection, such as service response time, loss, signal-to-noise ratio, cross-talk, echo, interrupts, frequency response, loudness levels, and so on. A subset of telephony QoS is Grade of Service (GOS) requirements, which comprises aspects of a connection relating to capacity and coverage of a network, for example guaranteed maximum blocking probability and outage probability.
