Tuesday 12 February 2013

Differentiate between linkers and loaders.


Linker: A linker is a program that takes one or more object files generated by a compiler and combines them into a single executable program.
Once a linker has scanned all of the input files to determine segment sizes, symbol definitions, and symbol references, figured out which library modules to include, and decided where in the output address space all of the segments will go, the next stage is the heart of the linking process: relocation. We use relocation to refer both to the process of adjusting program addresses to account for non-zero segment origins and to the process of resolving references to external symbols, since the two are frequently handled together. The linker's first pass lays out the positions of the various segments and collects the segment-relative values of all global symbols in the program. Once the linker determines the position of each segment, it potentially needs to fix up all storage addresses to reflect the new locations of the segments. On most architectures, addresses in data are absolute, while those embedded in instructions may be absolute or relative; the linker needs to fix each up accordingly.
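A minimal sketch of this fix-up step in Python may make it concrete (the segment names, symbol table layout, and relocation entries below are invented for illustration, not taken from any real object file format):

# Pass 1 has decided where each segment will live; every symbol is
# recorded as (segment, offset within that segment).
segment_bases = {".text": 0x1000, ".data": 0x4000}
symbols = {"main": (".text", 0x0), "counter": (".data", 0x8)}

# Relocation entries: (segment, offset of the address field, symbol).
relocations = [(".text", 0x14, "counter")]

memory = bytearray(0x5000)          # stand-in for the output address space

def absolute(sym):
    seg, off = symbols[sym]
    return segment_bases[seg] + off # segment base + segment-relative offset

# Fix up each recorded reference so it holds the symbol's absolute address.
for seg, off, sym in relocations:
    field = segment_bases[seg] + off
    memory[field:field + 4] = absolute(sym).to_bytes(4, "little")

print(hex(absolute("counter")))     # 0x4008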
Loader: A loader is the part of an operating system that is responsible for loading programs from executable files into memory, preparing them for execution, and then executing them.
It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution. Loading a program involves reading the contents of the executable file (the file containing the program text) into memory and then carrying out the other preparatory tasks required to make the executable ready to run. Once loading is complete, the operating system starts the program by passing control to the loaded program code.
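The loader's job can be sketched in a few lines of Python (the "executable" here is a made-up dictionary standing in for a parsed executable file):

# Copy each segment of the executable into memory at its load address,
# then transfer control to the entry point.
executable = {
    "segments": [(0x1000, b"\x55\x89\xe5\xc3"), (0x4000, b"\x00" * 16)],
    "entry": 0x1000,
}

memory = bytearray(0x5000)
for address, data in executable["segments"]:
    memory[address:address + len(data)] = data   # place program text and data

pc = executable["entry"]                         # pass control to the program
print(f"execution would start at {pc:#x}")

A real loader also maps shared libraries, sets up the stack and program arguments, and applies any load-time relocations before jumping to the entry point.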

What are macros and macro processors? Explain in brief.


A macro is similar to a subroutine (or a procedure), but there are important differences between them. A subroutine is a section of the program that is written once and can be used many times by simply calling it from any point in the program. Similarly, a macro is a section of code that the programmer writes (defines) once and can then use many times. The main difference between a subroutine and a macro is that the former is stored in memory once (just one copy), whereas the latter is duplicated as many times as necessary. Macros involve two separate phases: handling the definition and handling the expansions. A macro can only be defined once, but it can be expanded many times. Handling the definition is a relatively simple process. The assembler reads the definition from the source file and saves it in a special table, the Macro Definition Table (MDT). At this stage the assembler does not try to check the definition for errors, to assemble it, to execute it, or to do anything else with it.
A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so. Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text.
Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
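The following Python sketch shows the two phases in miniature: a macro definition table (MDT) is filled in once, and expansion then makes systematic text replacements. The macro syntax and the INCR example are invented for illustration:

import re

mdt = {}                                    # macro definition table

def define(name, params, body):
    mdt[name] = (params, body)              # save the definition; don't check it

def expand(text):
    def replace(match):
        name, argtext = match.group(1), match.group(2)
        if name not in mdt:
            return match.group(0)           # not a macro: copy text unchanged
        params, body = mdt[name]
        args = [a.strip() for a in argtext.split(",")]
        for param, arg in zip(params, args):
            body = body.replace(param, arg) # naive textual substitution
        return body
    return re.sub(r"(\w+)\(([^)]*)\)", replace, text)

define("INCR", ["reg"], "ADD reg, 1")
print(expand("INCR(AX)"))                   # -> ADD AX, 1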

Explain the significance of Lexical analysis and Syntax analysis.


Lexical analysis is the process of converting a sequence of characters into a sequence of tokens. A program or function which performs lexical analysis is called a lexical analyzer, lexer, or scanner. A lexer often exists as a single function which is called by a parser or another function. The lexical analysis or scanning of a program breaks it into a sequence of tokens. For example, a sequence of letters and digits may be transformed into a single token representing an identifier. Similarly, numbers of various types are tokens. Some tokens may correspond to individual symbols in the original string. For example, the character + may generate a single token. Even in this case, the resulting token has been recognized as an operator, and the token will normally carry this information. Each type of token is defined by a regular language. The lexical analysis of a program basically simulates a finite state machine.
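A lexer is easy to sketch in Python with regular expressions (the token classes below are illustrative):

import re

# One regular expression per token class; the scanner repeatedly takes
# the next match, which is exactly what a finite state machine would do.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":          # whitespace produces no token
            yield (match.lastgroup, match.group())

print(list(tokenize("count = count + 42")))
# [('IDENT', 'count'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '42')]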

Syntax analysis: Syntax analysis, or parsing, is about discovering structure in text and is used to determine whether or not a text conforms to an expected format. "Is this a textually correct Java program?" or "Is this bibliographic entry textually correct?" are typical questions that can be answered by syntax analysis. We are mostly interested in syntax analysis to determine that the source code of a program is correct and to convert it into a more structured representation (a parse tree) for further processing, such as semantic analysis or transformation.
Syntax analysis is one of the very mature areas of language theory, and many methods have been proposed to implement parsers; even a brief overview of these techniques is beyond the scope of this answer. One classic method, recursive-descent parsing, depends on writing a separate parsing procedure for each kind of syntactic structure, such as the if statement, assignment statement, expression, and so on, with each procedure responsible only for analysing its own kind of structure. If a structure contains another structure, its parsing procedure calls the procedure for the contained structure. The Meta-Environment uses a different method, called Scannerless Generalized Left-to-Right parsing, or SGLR for short.
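The procedure-per-construct idea is easy to see in a Python sketch; the toy grammar (expressions built from numbers, + and -) is invented for illustration:

# One parsing function per grammar rule; a rule that contains another
# construct calls that construct's function.
#   expr -> term (('+' | '-') term)*
#   term -> NUMBER

def parse_expr(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ("+", "-"):
        op = tokens[pos]
        right, pos = parse_term(tokens, pos + 1)
        node = (op, node, right)               # build the parse tree
    return node, pos

def parse_term(tokens, pos):
    return ("num", tokens[pos]), pos + 1       # expects a number token here

tree, _ = parse_expr(["1", "+", "2", "-", "3"])
print(tree)   # ('-', ('+', ('num', '1'), ('num', '2')), ('num', '3'))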

What is MASM? Explain its features.


MASM: Microsoft Macro Assembler
The Microsoft Macro Assembler (MASM) is an assembler for the x86 family of microprocessors, originally produced by Microsoft for the MS-DOS operating system.
The features of MASM are listed below:
i. It supports a wide variety of macro facilities and structured programming idioms, including high-level constructions for looping, procedure calls, and alternation (therefore, MASM is an example of a high-level assembler).
ii. MASM is one of the few Microsoft development tools for which there was no separate 16-bit and 32-bit version.
iii. For the programmer looking for additional performance, the assembler affords a three-pronged approach to performance-based solutions.
iv. MASM can build very small, high-performance executable files that are well suited where size and speed matter.
v. When additional performance is required for other languages, MASM can enhance the performance of these languages with small, fast, and powerful dynamic link libraries.
vi. For programmers who work in Microsoft Visual C/C++, MASM builds modules and libraries that are in the same format, so the C/C++ programmer can build modules or libraries in MASM and directly link them into their own C/C++ programs. This allows the C/C++ programmer to target critical areas of their code in a very efficient and convenient manner: graphics manipulation, games, very high-speed data manipulation and processing, parsing at speeds that most programmers have never seen, encryption, compression, and any other form of information processing that is processor intensive.
vii. MASM32 has been designed to be familiar to programmers who have already written API-based code in Windows. The invoke syntax of MASM allows functions to be called in much the same way as they are called in a high-level compiler.

Draw the flowchart for Pass 1 assembler and explain it.

Pass 1 Flowchart
The primary function performed by the analysis phase is the building of the symbol table. For this purpose it must determine the addresses with which the symbolic names used in a program are associated. It is possible to determine some addresses directly, e.g. the address of the first instruction in the program; however, others must be inferred.
To implement memory allocation, a data structure called the location counter (LC) is introduced. The location counter is always made to contain the address of the next memory word in the target program. It is initialized to the constant specifying the address of the program's first memory word (typically the operand of the START statement). Whenever the analysis phase sees a label in an assembly statement, it enters the label and the contents of the LC in a new entry of the symbol table. It then finds the number of memory words required by the assembly statement and updates the LC contents. This ensures that the LC points to the next memory word in the target program even when machine instructions have different lengths and DS/DC statements reserve different amounts of memory. To update the contents of the LC, the analysis phase needs to know the lengths of different instructions. This information simply depends on the assembly language, hence the mnemonics table can be extended to include this information in a new field called length. We refer to the processing involved in maintaining the location counter as LC processing.
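LC processing can be sketched in Python as follows (the mnemonics, their lengths, and the three-field statement format are illustrative, not from any particular assembler):

# Pass 1: initialize LC from START, record each label against the
# current LC value, and advance LC by the length of every statement.
LENGTHS = {"MOVER": 1, "ADD": 1}      # length field of the mnemonics table

def pass1(statements):
    symtab, lc = {}, 0
    for label, mnemonic, operand in statements:
        if mnemonic == "START":
            lc = int(operand)                  # LC starts at the given address
            continue
        if label:
            symtab[label] = lc                 # new symbol table entry
        if mnemonic == "DS":
            lc += int(operand)                 # reserve that many words
        else:
            lc += LENGTHS[mnemonic]            # instruction length
    return symtab

program = [
    (None,   "START", "100"),
    (None,   "MOVER", "AREG, N"),
    ("LOOP", "ADD",   "AREG, ONE"),
    ("N",    "DS",    "1"),
    ("ONE",  "DS",    "1"),
]
print(pass1(program))   # {'LOOP': 101, 'N': 102, 'ONE': 103}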

Explain language processing activities with suitable diagrams.


Language processing activities arise in order to bridge the gap between the manner in which a software designer describes the ideas concerning the behaviour of software and the manner in which these ideas are implemented in a computer system. The designer expresses the ideas in terms related to the application domain of the software. To implement these ideas, their description has to be interpreted in terms related to the execution domain of the computer system. We use the term semantics to represent the rules of meaning of a domain, and the term semantic gap to represent the difference between the semantics of two domains. The fundamental language processing activities can be divided into those that bridge the specification gap and those that bridge the execution gap:
· Program Generation Activities
· Program Execution Activities
A program generation activity aims at the automatic generation of a program. The source language is a specification language of an application domain, and the target language is typically a procedure-oriented PL. A program execution activity organizes the execution of a program written in a PL on a computer system. Its source language could be a procedure-oriented language or a problem-oriented language.
Program Generation:
The program generator is a software system which accepts the specification of a program to be generated, and generates a program in the target PL. We call this the program generator domain. The specification gap is now the gap between the application domain and the program generator domain. This gap is smaller than the gap between the application domain and the target PL domain.
Reduction in the specification gap increases the reliability of the generated program. Since the generator domain is close to the application domain, it is easy for the designer or programmer to write the specification of the program to be generated.
Program Execution:
Two popular models for program execution are:
· Translation
· Interpretation
Program Translation
The program translation model bridges the execution gap by translating a program written in a PL, called the source program (SP), into an equivalent program in the machine or assembly language of the computer system, called the target program (TP).
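A toy contrast between the two models in Python (the "instructions" emitted by the translator are invented for illustration):

# Translation: produce an equivalent target program once, to be run later.
def translate(expr):
    left, _, right = expr
    return [f"LOAD {left}", f"ADD {right}", "STORE result"]

# Interpretation: compute the meaning of the source program directly.
def interpret(expr, env):
    left, _, right = expr
    return env[left] + env[right]

print(translate(("a", "+", "b")))                    # ['LOAD a', 'ADD b', 'STORE result']
print(interpret(("a", "+", "b"), {"a": 2, "b": 3}))  # 5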

Draw the diagram of the logical structure of the Oracle database and explain it in brief.


The Oracle database is divided into increasingly smaller logical units to manage, store, and retrieve data efficiently and quickly. The figure shows the relationships between the logical structures of the database.
The logical structure mainly consists of tablespaces, segments, extents, and Oracle data blocks.
TABLESPACE
Each database is logically divided into one or more tablespaces. One or more data files are explicitly created for each tablespace to physically store the data of all logical structures in that tablespace. The combined size of the data files in a tablespace is the total storage capacity of the tablespace.
SEGMENT 
A segment is a set of extents allocated for a certain logical structure. A segment can be one of the following types: data segment, index segment, temporary segment, or rollback segment.
EXTENT 
The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks, obtained in a single allocation, and used to store a specific type of information.
ORACLE DATA BLOCKS
 At the finest level of granularity, Oracle database data is stored in data blocks. One data block corresponds to a specific number of bytes of physical database space on disk. The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to five other block sizes.

Draw a diagram of oracle architecture and explain it briefly.

The figure displays the architecture of Oracle Database 9i. It is broadly divided into the memory components, which form the Oracle instance, and the physical database components, where different kinds of data are stored. The Oracle server consists of physical files and memory components. The Oracle 9i Database product is made up of three main components, namely:
The Oracle Server: This is the Oracle database management system that is able to store, manage and manipulate data. It consists of all the files, structures, processes that form Oracle Database 9i. The Oracle server is made up of an Oracle instance and an Oracle database.
The Oracle Instance: This consists of the memory components of Oracle and various background processes.
The Oracle Database: This is the centralized repository where the data is stored. It has a physical structure that is visible to the operating system, made up of operating system files, and a logical structure that is recognized only by the Oracle Server.

Differentiate between PL/SQL functions and procedures.



i. A procedure is a named PL/SQL block which performs one or more tasks, whereas a function performs a specific task.
ii. A procedure may or may not return a value, whereas a function must return one value.
iii. A function can be called from SQL statements, whereas a procedure cannot be called from SQL statements.
iv. A function can be called within a stored procedure, but a stored procedure cannot be called within a function that is used in a SQL statement.
v. Functions are normally used for computations, whereas procedures are normally used for executing business logic.
vi. You can have DML (INSERT, UPDATE, DELETE) statements in a function, but you cannot call such a function in a SQL query.
vii. A function returns one value only, while a procedure can return multiple values (max 1024) through its output parameters.
viii. A stored procedure always returns an integer value by default (zero), whereas a function's return type can be scalar, a table, or table-valued.
ix. A stored procedure is a precompiled execution plan, whereas functions are not.
A procedure may modify an object, whereas a function can only return a value. The RETURN statement immediately completes the execution of a subprogram and returns control to the caller.

Write PL/SQL code to find the sum of N numbers using a WHILE loop.


DECLARE
    num NUMBER := '&Input_Number';  -- N, supplied via a substitution variable
    res NUMBER := 0;                -- running sum
BEGIN
    WHILE num > 0 LOOP
        res := res + num;           -- add the current number to the sum
        num := num - 1;             -- count down towards 1
        EXIT WHEN num = 0;          -- optional: the WHILE condition stops the loop anyway
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(res || ' Is Total !');
END;
/
Enter value for input_number: 5

15 Is Total !

PL/SQL procedure successfully completed.

Sunday 10 February 2013

Draw the data flow diagram of order processing and explain it in brief.

The data flow model is a way of showing how data is processed by a system. At the analysis level, data flow models should be used to model the way in which data is processed in the existing system. The notation used in these models represents functional processing, data stores, and data movements between functions. Data flow models are used to show how data flows through a sequence of processing steps. The data is transformed at each step before moving on to the next stage. These processing steps or transformations are program functions when data-flow diagrams are used to document a software design.

The model shows how the order for the goods moves from process to process. It also shows the data stores that are involved in this process.
There are various notations used for data-flow diagrams. In the figure, rounded rectangles represent processing steps, arrows annotated with the data name represent data flows, and rectangles represent data stores (data sources). Data-flow diagrams have the advantage that, unlike some other modelling notations, they are simple and intuitive.

Discuss the reuse of software at different levels

The reuse of software can be considered at a number of different levels:
  1. Application system reuse: The whole of an application system may be reused. The key problem here is ensuring that the software is portable; it should execute on several different platforms.
  2. Sub-system reuse: Major sub-systems of an application may be reused. For example, a pattern-matching system developed as part of a text processing system may be reused in a database management system.
  3. Module or Object reuse: Components of a system representing a collection of functions may be reused. For example, an Ada package or a C++ object implementing a binary tree may be reused in different applications.
  4. Function reuse: Software components, which implement a single function, such as a mathematical function, may be reused.

Discuss the four aspects of fault tolerance

There are four aspects of fault tolerance (a code sketch follows the list):
i. Failure detection: The system must detect that a particular state combination has resulted, or will result, in a system failure.
ii. Damage assessment: The parts of the system state which have been affected by the failure must be detected.
iii. Fault recovery: The system must restore its state to a known 'safe' state. This may be achieved by correcting the damaged state or by restoring the system to a known 'safe' state.
iv. Fault repair: This involves modifying the system so that the fault does not recur. In many cases, software failures are transient and due to a peculiar combination of system inputs; no repair is necessary, as normal processing can resume immediately after fault recovery. This is an important distinction between hardware and software faults.
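The four aspects can be rendered as a short Python sketch (all of the names and the 'sensor' state here are invented for illustration, not taken from any real framework):

def assess(state):
    return [key for key, value in state.items() if value is None]  # damaged parts

def recover(state, damaged, safe_state):
    for key in damaged:
        state[key] = safe_state[key]       # restore a known 'safe' state
    return state

def step(state):
    if state["sensor"] is None:
        raise ValueError("bad sensor reading")
    return state

safe = {"sensor": 0, "actuator": 0}
state = {"sensor": None, "actuator": 1}

try:
    step(state)                            # normal processing
except ValueError:                         # 1. failure detection
    damaged = assess(state)                # 2. damage assessment
    state = recover(state, damaged, safe)  # 3. fault recovery
    # 4. fault repair would happen here for non-transient faults

print(state)   # {'sensor': 0, 'actuator': 1}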

What is software reliability? Why is reliability more important than efficiency?


Software reliability is a function of the number of failures experienced by a particular user of that software. A software failure occurs when the software is executing and does not deliver the service expected by the user.
Reliability is more important than efficiency for the following reasons:
i. Computers are now cheap and fast: There is little need to maximize equipment usage. Paradoxically, however, faster equipment leads to increasing expectations on the part of the user, so efficiency considerations cannot be completely ignored.
ii. Unreliable software is liable to be discarded by users: If a company attains a reputation for unreliability because of a single unreliable product, it is likely to affect future sales of all of that company's products.
iii. System failure costs may be enormous: For some applications, such as a reactor control system or an aircraft navigation system, the cost of system failure is orders of magnitude greater than the cost of the control system.
iv. Unreliable systems are difficult to improve: It is usually possible to tune an inefficient system because most execution time is spent in small program sections. An unreliable system is more difficult to improve, as unreliability tends to be distributed throughout the system.
v. Efficiency is predictable: If a program takes a long time to execute, users can adjust their work to take this into account. Unreliability, by contrast, usually surprises the user.
vi. Unreliable systems may cause information loss: Information is very expensive to collect and maintain; it may sometimes be worth more than the computer system on which it is processed.

Explain briefly the incremental development model


The incremental model combines elements of the linear sequential model with the iterative philosophy of prototyping. Each linear sequence produces a deliverable "increment" of the software. For example, word processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment; more sophisticated editing and document production capabilities in the second increment; spelling and grammar checking in the third increment; and advanced page layout capability in the fourth increment.
Incremental development is based on use cases or use case flows, which define working pieces of functionality at the user level. Within an 'increment', the models required to develop a working software increment are each incremented until a working, tested, executable piece of software is produced with incremental functionality. This approach:
·         Improves estimation, planning and assessment. Use cases provide better baselines for estimation than traditionally written specifications. The estimates are continuously updated and improved throughout the project.
·         Allows risks to the project to be addressed incrementally and reduced early in the life cycle. Early increments can be scheduled to cover the most risky parts of the architecture. When the architecture is stable, development can be speeded up.
·         Benefits users, managers and developers, who see working functionality early in the lifecycle. Each increment is, effectively, a prototype for the next increment.

Discuss the limitations of the linear sequential model in software engineering

The following are the limitations of the linear sequential model in software engineering:
·         It assumes that the requirements of a system can be frozen before the design begins.
·         Freezing the requirements usually requires choosing the hardware. A large project might take a few years to complete.
·         The waterfall model stipulates that the requirements be completely specified before the rest of the development can proceed.
·         It is a document-driven process that requires formal documents at the end of each phase. This approach tends to make the process documentation-heavy and is not suitable for many applications.
·         It is difficult for the customers to state the requirements clearly at the beginning. There is always a certain degree of natural uncertainty at the beginning of each project.
·         It is difficult and costly to accommodate changes that occur at later stages.
·         The customer can see a working version only at the end. Thus any changes suggested then are not only difficult to incorporate but also expensive. This may result in disaster if any undetected problems are precipitated to this stage.

List the applications of software


The applications of software are listed below:
Ø  System Software
Ø  Business Software
Ø  Real Time Software
Ø  Engineering Software
Ø  Embedded Software
Ø  Personal computer Software
Ø  Web-based Software
Ø  Artificial intelligence Software
Ø  Software Crisis
System Software:
It is computer software or an operating system designed to operate and control the computer hardware and to provide a platform for running application software.
Real Time Software:
Real-time software enables the user to execute various tasks and activities at the same time, as long as the programs are kept open. In computer systems, real-time operating systems accommodate a multitude of programs to run and operate even if the user is focused on just one application. Some of these software programs are also designed to fulfil scheduled tasks; even if not opened, they automatically respond to the computer's clock and perform the tasks given to them.
For example, real-time software programs can be found in various applications, such as anti-virus programs that perform scheduled maintenance checks, database applications like airline database controls, and 24-hour transaction facilities.
Business Software:
Business software or business application is any software or set of computer programs that are used by business users to perform various business functions. These business applications are used to increase productivity, to measure productivity and to perform business functions accurately.
Some business applications are interactive i.e. they have a graphical user interface or user interface and user can query/modify/input data and view results instantaneously. They can also run reports instantaneously.
Engineering and Scientific Software:
It is used for such fields as automated manufacturing, molecular modelling, volcanology, and construction. Engineering and scientific software is the set of tools used to design, simulate and analyse civil engineering structures such as bridges, roads and buildings. Scientific software is typically used to solve differential equations. Although some differential equations have relatively simple mathematical solutions, exact solutions of many differential equations are very difficult to obtain.    
Embedded Software:
Embedded software is computer software, written to control machines or devices that are not typically thought of as computers. It is typically specialized for the particular hardware that it runs on and has time and memory constraints. Manufacturers 'build in' embedded software in the electronics in cars, telephones, modems, robots, appliances, toys, security systems, pacemakers, televisions and set-top boxes, and digital watches.
Personal Computer Software:
The software which we use on personal computers is called personal computer software. Day-to-day applications like MS Word, spreadsheets, multimedia, database management, and personal and business financial applications are some common examples of personal computer software.
Artificial Intelligence Software:
Artificial intelligence software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Expert systems (also called knowledge-based systems), pattern recognition, and game playing are representative examples of applications within this category.
Software Crisis:
Software crisis was a term used in the early days of computing science. The term was used to describe the impact of rapid increases in computer power and the complexity of the problems that could be tackled. In essence, it refers to the difficulty of writing correct, understandable, and verifiable computer programs. The roots of the software crisis are complexity, expectations, and change.