Wednesday 28 August 2013

Write a C program that illustrates the creation of child process using fork system call. One process finds sum of even series and other process finds sum of odd series.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int i, n, sum = 0;
    pid_t pid;
    system("clear");
    printf("Enter n value:");
    scanf("%d", &n);
    pid = fork();
    if (pid == 0)
    {
        /* child process: sum of the odd series */
        printf("From child process\n");
        for (i = 1; i < n; i += 2)
        {
            printf("%d ", i);
            sum += i;
        }
        printf("\nOdd sum:%d\n", sum);
    }
    else
    {
        /* parent process: sum of the even series */
        printf("From parent process\n");
        for (i = 0; i < n; i += 2)
        {
            printf("%d ", i);
            sum += i;
        }
        printf("\nEven sum:%d\n", sum);
        wait(NULL);   /* wait for the child to finish */
    }
    return 0;
}

Demonstrate how and when can you use the commands: vi, cat, chmod, grep, man, pwd, ps, kill, mkdir, rm

Vi:
The vi command starts a screen-oriented text editor that comes as standard with most UNIX and Linux systems.

Cat
This command displays the contents of the target file(s) on the screen, one after the other. You can also use it to create files from keyboard input by redirecting its output, e.g. cat > newfile (> is the output redirection operator).

chmod
This command is used to change the mode (access permissions) of files, e.g. chmod u+x script makes a file executable by its owner.

Pwd:
This command displays the full absolute path to your current location in the filesystem.

Man:
Man is the online UNIX user manual, and it can be used to get help with commands and find out what options are supported. It has quite a terse style which is often not that helpful, so some users prefer to use the info utility if it is installed.

Mkdir:
This command creates a subdirectory in the current working directory. You can only create subdirectories in a directory if you have write permission on that directory.

Rm (remove/delete)
This command removes the specified files. Unlike in other operating systems, it is almost impossible to recover deleted files unless you have a backup, so use this command with care.

Grep:
This command searches the named files (or standard input) for lines matching a given pattern and prints them, e.g. grep main prog.c.

Ps:
This command reports a snapshot of the processes currently running on the system, together with their process IDs (PIDs).

Kill:
This command sends a signal to a process identified by its PID; by default it sends SIGTERM to request termination, while kill -9 PID forces it.

Explain the use of following variables: IFS, PATH, LOGNAME, PROMPT

IFS (Internal Field Separator)
IFS can be redefined to parse one or more lines of data whose fields are not delimited by the default white-space characters. IFS contains a string of characters that are used as word separators on the command line. The string normally consists of the space, tab and newline characters.
PATH
PATH is a list of directories that the shell uses to locate executable files for the commands.
LOGNAME
This variable shows your user name. When you wander around the file system, you may sometimes forget your login name. LOGNAME is used in shell scripts which need to know just the username before deciding what to do.
Usage: logname
Print the name of the current user.

PROMPT (PS1)
In Bourne-family shells the primary prompt string is kept in the variable PS1; redefining it changes the prompt the shell displays before each command. PS2 holds the secondary prompt used for continuation lines.

Explain the use of sync and fsck.

If you have to shut a system down extremely urgently or for some reason cannot use shutdown, it is at least a good idea to first run the command:
    # sync 
which forces the state of the file system to be brought up to date.
System startup:
At system startup, the operating system performs various low-level tasks, such as initialising the memory system, loading up device drivers to communicate with hardware devices, mounting filesystems and creating the init process (the parent of all processes). init's  primary responsibility is to start up the system services as specified in /etc/inittab. Typically these services include gettys (i.e. virtual terminals where users can login), and the scripts in the directory /etc/rc.d/init.d which usually spawn high-level daemons such as httpd (the web server). On most UNIX systems you can type dmesg to see system startup messages, or look in /var/log/messages.
If a mounted filesystem is not "clean" (e.g. the machine was turned off without shutting down properly), a system utility fsck is automatically run to repair it. Automatic running can only fix certain errors, however, and you may have to run it manually:

# fsck filesys 
where filesys is the name of a device (e.g. /dev/hda1) or a mount point (like /). "Lost" files recovered during this process end up in the lost+found directory. Some more modern file systems called "journaling" file systems don't require fsck, since they keep extensive logs of file system events and are able to recover in a similar way to a transactional database.

Explain the term inter-process communication. What are various approaches to achieve the same.

Inter Process Communication:
·        Pipes
·        Fifos (Named Pipes)
·        Message queues
·        Semaphores
·        Shared memory
·        Sockets
Pipes:
Once we have several processes running, we realize that they cannot communicate with each other. One of the mechanisms that allows related processes to communicate is the pipe, or anonymous pipe.
A pipe is a one-way mechanism that allows two related processes to send a byte stream from one of them to the other.
Named Pipe:
A named pipe is a pipe whose access point is a file kept on the file system.
By opening this file for reading, a process gets access to the reading end of the pipe. By opening the file for writing, the process gets access to the writing end of the pipe.
Message Queues:
A message queue is a queue onto which messages can be placed. A message is composed of a message type and message data.
            A message queue can be either private or public. If it is private, it can be accessed only by its creating process or by child processes of that creator. If it is public, it can be accessed by any process that knows the queue's key.
Semaphore:
A semaphore is a resource that contains an integer value and allows processes to synchronize by testing and setting this value in a single atomic operation. This means that a process which tests the value of a semaphore and sets it to a different value is guaranteed that no other process will interfere with the operation midway.
Shared Memory:
As we know, many methods were created to let processes communicate, and all of this communication is done in order to share data. The problem is that all these methods are sequential in nature; we want to allow processes to share data in a random-access manner. For this, shared memory comes to the rescue.

            With shared memory, we declare a given section of memory as one that will be used simultaneously by several processes, so the data found in this memory section will be seen by all of them. This also means that several processes might try to alter this memory area at the same time, and thus some method should be used to synchronize their access to it.

Discuss the architecture of UNIX operating system with appropriate diagram.



The kernel of UNIX is the hub of the operating system: it allocates time and memory to programs and handles the file store and communications in response to system calls.
As an illustration of the way that the shell and the kernel work together, suppose a user types rm myfile (which has the effect of removing the file myfile). The shell searches the file store for the file containing the program rm, and then requests the kernel, through system calls, to execute the program rm on myfile. When the process rm myfile has finished running, the shell then returns the UNIX prompt % to the user, indicating that it is waiting for further commands.
Amongst the functions performed by the kernel are:
·        managing the machine's memory and allocating it to each process.
·        scheduling the work done by the CPU so that the work of each user is carried out as efficiently as is possible.
·        organising the transfer of data from one part of the machine to another.
·        accepting instructions from the shell and carrying them out.
·        enforcing the access permissions that are in force on the file system.
The shell:
The shell acts as an interface between the user and the kernel. When a user logs in, the login program checks the username and password, and then starts another program called the shell. The shell is a command line interpreter (CLI). It interprets the commands the user types in and arranges for them to be carried out. The commands are themselves programs: when they terminate, the shell gives the user another prompt (% on our systems).
The user can customise his/her own shell, and users can use different shells on the same machine.
The shell keeps a list of the commands you have typed in. If you need to repeat a command, use the cursor keys to scroll up and down the list or type history for a list of previous commands.
You can use any one of these shells if they are available on your system. And you can switch between the different shells once you have found out if they are available.
·        Bourne shell (sh)
·        C shell (csh)
·        TC shell (tcsh)
·        Korn shell (ksh)

·        Bourne Again SHell (bash)

Differentiate between FQDN and PQDN

FQDN
A fully qualified domain name (FQDN) is the complete domain name for a specific computer, or host, on the Internet. The FQDN consists of two parts: the hostname and the domain name. For example, an FQDN for a hypothetical mail server might be
mymail.somecollege.edu. The hostname is mymail, and the host is located within the domain somecollege.edu.

PQDN
If a label is not terminated by a null string, it is called a partially qualified domain name (PQDN). A PQDN starts from a node, but it does not reach the root. It is used when the name to be resolved belongs to the same site as the client. Here the resolver can supply the missing part, called suffix, to create an FQDN.

Explain the various steps in TCP congestion control.

TCP Congestion Control Algorithms One big difference between TCP and UDP is the congestion control algorithm. The TCP congestion algorithm prevents a sender from overrunning the capacity of the network (for example, slower WAN links). TCP can adapt the sender's rate to network capacity and attempt to avoid potential congestion situations. In order to understand the difference between TCP and UDP, understanding basic TCP congestion control algorithms is very helpful. Several congestion control enhancements have been added and suggested to TCP over the years. This is still an active and ongoing research area, but modern implementations of TCP contain four intertwined algorithms as basic Internet standards:

·        Slow start
·        Congestion avoidance
·        Fast retransmit
·        Fast recovery

Slow Start: Old implementations of TCP start a connection with the sender injecting multiple segments into the network, up to the window size advertised by the receiver. Although this is fine when the two hosts are on the same LAN, problems can arise if there are routers and slower links between the sender and the receiver: some intermediate routers cannot handle the burst, packets get dropped, retransmissions result, and performance is degraded.
Congestion Avoidance: The assumption of the algorithm is that packet loss caused by damage is very small (much less than 1%). Therefore, the loss of a packet signals congestion somewhere in the network between the source and destination. There are two indications of packet loss:
A timeout occurs.
Duplicate ACKs are received.
Congestion avoidance and slow start are independent algorithms with different objectives. But when congestion occurs, TCP must slow down its transmission rate of packets into the network and invoke slow start to get things going again. In practice, they are implemented together. Congestion avoidance and slow start require that two variables be maintained for each connection:
A congestion window, cwnd
A slow start threshold size, ssthresh

Fast Retransmit: Fast retransmit avoids having TCP wait for a timeout to resend lost segments. Modifications to the congestion avoidance algorithm were proposed in 1990. Before describing the change, realize that TCP can generate an immediate acknowledgment (a duplicate ACK) when an out-of-order segment is received. This duplicate ACK should not be delayed. The purpose of this duplicate ACK is to let the other end know that a segment was received out of order and to tell it what sequence number is expected.
Fast recovery: After fast retransmit sends what appears to be the missing segment, congestion avoidance, but not slow start, is performed. This is the fast recovery algorithm. It is an improvement that allows high throughput under moderate congestion, especially for large windows. The reason for not performing slow start in this case is that the receipt of the duplicate ACKs tells TCP more than just a packet has been lost. Because the receiver can only generate the duplicate ACK when another segment is received, that segment has left the network and is in the receiver's buffer. That is, there is still data flowing between the two ends, and TCP does not want to reduce the flow abruptly by going into slow start.

Discuss User Datagram protocol.

The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite (the set of network protocols used for the Internet). With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.
UDP uses a simple transmission model with a minimum of protocol mechanism.[1] It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user's program. As this is normally IP over unreliable media, there is no guarantee of delivery, ordering or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.
UDP is suitable for purposes where error checking and correction is either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.[2] If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) which are designed for this purpose.
A number of UDP's attributes make it especially suited for certain applications.
·         It is transaction-oriented, suitable for simple query-response protocols such as the Domain Name System or the Network Time Protocol.
·         It provides datagrams, suitable for modeling other protocols such as in IP tunneling or Remote Procedure Call and the Network File System.
·         It is simple, suitable for bootstrapping or other purposes without a full protocol stack, such as the DHCP and Trivial File Transfer Protocol.
·         It is stateless, suitable for very large numbers of clients, such as in streaming media applications, for example IPTV.
·         The lack of retransmission delays makes it suitable for real-time applications such as Voice over IP, online games, and many protocols built on top of the Real Time Streaming Protocol.
·         It works well in unidirectional communication, and is suitable for broadcast information such as in many kinds of service discovery, and shared information such as broadcast time or the Routing Information Protocol.

Bring out the differences between POP and IMAP4.

POP3 vs. IMAP

·        Mail storage: With POP3, email must be downloaded onto the desktop PC before it can be displayed, so you need to download all your email again when using another desktop PC to check it, and things may get confusing if you check email both in the office and at home; the downloaded email may also be deleted from the server, depending on the settings of your email client. With IMAP, email is kept on the server, so there is no need to download it again on another PC and it is easier to identify unread email.
·        Downloading: POP3 downloads all messages, together with their attachments, during the 'check new email' process; IMAP downloads a whole message only when it is opened for display.
·        Mailboxes: With POP3, mailboxes can be created only on the desktop PC, and only one mailbox (INBOX) exists on the server; with IMAP, multiple mailboxes can be created on the desktop PC as well as on the server.
·        Filters: POP3 filters can move incoming/outgoing messages only to local mailboxes; IMAP filters can move them to any mailbox, whether it is located on the server or on the PC.
·        Outgoing mail: With POP3, outgoing email is stored only locally on the desktop PC; with IMAP, it can be filed into a mailbox on the server for access from other machines.
·        Deletion: With POP3, messages are deleted on the desktop PC, and it is comparatively inconvenient to clean up your mailbox on the server; with IMAP, messages can be deleted directly on the server, which makes cleaning up the server mailbox more convenient.
·        Reloading: With POP3, messages may be reloaded onto the desktop PC several times due to corruption of system files; with IMAP, reloading messages from the server happens much less often.

Explain the concept of multi-protocol encapsulation in ATM networks.

ATM-based networks are of increasing interest for both local and wide area applications. The ATM architecture is different from the standard LAN architectures and, for this reason, changes are required so that traditional LAN products will work in the ATM environment. In the case of TCP/IP, the main change required is in the network interface to provide support for ATM. There are several approaches already available, two of which are important to the transport of TCP/IP traffic.
Multiprotocol Encapsulation over ATM is specified in RFC 2684. It defines two mechanisms for identifying the protocol carried in ATM Adaptation Layer 5 (AAL5) frames. It replaces RFC 1483, a standard data link access protocol supported by DSL modems.
RFC 2684 describes two encapsulation mechanisms for network traffic: VC multiplexing, in which each protocol is carried over a separate ATM virtual circuit, and LLC encapsulation, in which an LLC header in each AAL5 frame identifies the protocol being carried. DSL modems often include a setting for RFC 1483 bridging. This is distinct from other "bridge modes" commonly found in combined DSL modems and routers, which turn off the router portion of the DSL modem.