Friday, October 31, 2008

Operating System Concepts

New Concepts
Linux For Beginners
Learn Unix
Unix For Beginners


Other OS Concepts

Operating System Statistics
Operating System Statistics - Latest Review
Basic Operating System Concepts
Basic OS Concepts Blog
Operating System Comparison
Exact Comparison
Mac OS Leopard Features
Mac Features
Google Operating System
Goobuntu is a Linux distribution based on Ubuntu that Google uses internally.
Unix OS Concepts
About Unix Operating System
Windows Vista Features
Features In Vista
Unix Programs
Unix Basic Programs
Sun Cluster System
Sun Cluster System Hardware and Software Components (Solaris)
AIX Operating System
About IBM AIX
AIX Toolbox
AIX Toolbox For Linux Applications
Linux Operating System
A Linux Guide
About Vista
About Vista Operating System
Java For Mac
Java Development Kit (JDK) For Mac


Saturday, July 12, 2008

HIBERNATE - Features of Hibernate


  • Transparent persistence without byte code processing
    • Transparent persistence
    • JavaBeans style properties are persisted
    • No build-time source or byte code generation / processing
    • Support for extensive subset of Java collections API
    • Collection instance management
    • Extensible type system
    • Constraint transparency
    • Automatic Dirty Checking
    • Detached object support
  • Object-oriented query language
    • Powerful object-oriented query language
    • Full support for polymorphic queries
    • New Criteria queries
    • Native SQL queries
  • Object / Relational mappings
    • Three different O/R mapping strategies
    • Multiple-objects to single-row mapping
    • Polymorphic associations
    • Bidirectional associations
    • Association filtering
    • Collections of basic types
    • Indexed collections
    • Composite Collection Elements
    • Lifecycle objects
  • Automatic primary key generation
    • Multiple synthetic key generation strategies
    • Support for application assigned identifiers
    • Support for composite keys
  • Object/Relational mapping definition
    • XML mapping documents
    • Human-readable format
    • XDoclet support
  • HDLCA (Hibernate Dual-Layer Cache Architecture)
    • Thread safeness
    • Non-blocking data access
    • Session level cache
    • Optional second-level cache
    • Optional query cache
    • Works well with others
  • High performance
    • Lazy initialization
    • Outer join fetching
    • Batch fetching
    • Support for optimistic locking with versioning/timestamping
    • Highly scalable architecture
    • High performance
    • No "special" database tables
    • SQL generated at system initialization time
    • (Optional) Internal connection pooling and PreparedStatement caching
  • J2EE integration
    • JMX support
    • Integration with J2EE architecture (optional)
    • New JCA support
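
To make the query-language items above concrete, here is a minimal, hedged sketch of an HQL query, a Criteria query, and a native SQL query. The Cat class, its mapping file, the hibernate.cfg.xml configuration, and the property names are hypothetical placeholders, not part of any particular project.

    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;
    import org.hibernate.criterion.Restrictions;

    // Hypothetical mapped POJO; a corresponding Cat.hbm.xml mapping is assumed to exist.
    class Cat {
        private Long id;
        private double weight;
        public Cat() { }
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public double getWeight() { return weight; }
        public void setWeight(double weight) { this.weight = weight; }
    }

    public class QuerySketch {
        public static void main(String[] args) {
            // hibernate.cfg.xml (database URL, dialect, mappings) is assumed to be on the classpath.
            SessionFactory factory = new Configuration().configure().buildSessionFactory();
            Session session = factory.openSession();

            // Object-oriented (HQL) query; works polymorphically against mapped classes.
            List cats = session.createQuery("from Cat c where c.weight > :minWeight")
                               .setParameter("minWeight", 5.0)
                               .list();

            // The same filter expressed as a Criteria query.
            List heavyCats = session.createCriteria(Cat.class)
                                    .add(Restrictions.gt("weight", 5.0))
                                    .list();

            // Native SQL is available when hand-written SQL is needed.
            List rows = session.createSQLQuery("select * from CATS").list();

            session.close();
            factory.close();
        }
    }

The sketch only illustrates the API shape; in a real project the mapping documents and dialect settings listed above do the heavy lifting.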

Java Features In Version 9.21


The following Java features were introduced in Version 9.21.

JVM 1.2 Support in J/Foundation

Dynamic Server supports Version 1.2 of the Java Virtual Machine (JVM).

Default Values of Java Configuration Parameters

The default values of the JDKVERSION, JVPJAVAHOME, JVPJAVALIB, and JVPJAVAVM parameters in the ONCONFIG file have changed for Dynamic Server.

JDBC 2.0 Support

The IBM Informix JDBC driver is bundled with Embedded SQLJ 1.10.1.JC1, a product for embedding SQL statements in Java. Dynamic Server supports the following JDBC 2.0 features:
  • Complex data types
  • Collections
  • Scrollable cursors
  • Batch updates
  • Interval data types
  • Extensions to prepared statements
  • Callable statements
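
As a rough illustration of two of these features, the hedged sketch below opens a scrollable result set and then sends a small batch of updates. The connection URL, user, password, table, and column names are hypothetical examples; the exact URL syntax is the driver's, so check the IBM Informix JDBC documentation before copying it.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class Jdbc2Sketch {
        public static void main(String[] args) throws SQLException {
            // Hypothetical URL; adjust host, port, database, and server name for your installation.
            String url = "jdbc:informix-sqli://localhost:1526/stores_demo:INFORMIXSERVER=demo_on";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {

                // Scrollable cursor: move to the end, then back to the start.
                Statement st = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                                    ResultSet.CONCUR_READ_ONLY);
                ResultSet rs = st.executeQuery("SELECT customer_num, fname FROM customer");
                if (rs.last()) {
                    System.out.println("Rows: " + rs.getRow());
                }
                rs.beforeFirst();
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " " + rs.getString(2));
                }

                // Batch update: queue several statements and send them in one round trip.
                Statement batch = conn.createStatement();
                batch.addBatch("UPDATE customer SET fname = 'Ann' WHERE customer_num = 101");
                batch.addBatch("UPDATE customer SET fname = 'Bob' WHERE customer_num = 102");
                int[] counts = batch.executeBatch();
                System.out.println("Statements executed in batch: " + counts.length);
            }
        }
    }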

GLS Support for J/Foundation

Dynamic Server supports the following GLS features:
  • CLIENT_LOCALE, DB_LOCALE, GL_DATE, GL_DATETIME, and DBTIME environment variables
  • New connection properties (NEWLOCALE and NEWCODESET) for mapping a locale or code set in the JDBC driver
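
The sketch below shows the usual way connection properties such as these are handed to a JDBC driver. Only the property names NEWLOCALE and NEWCODESET come from the list above; the values, URL, and credentials are hypothetical placeholders, so check the driver documentation for the value format it expects.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class GlsConnectionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "informix");           // hypothetical credentials
            props.setProperty("password", "secret");
            props.setProperty("NEWLOCALE", "en_us.8859-1");  // illustrative values only
            props.setProperty("NEWCODESET", "8859-1");

            // Hypothetical URL; adjust host, port, database, and server name.
            String url = "jdbc:informix-sqli://localhost:1526/stores_demo:INFORMIXSERVER=demo_on";
            try (Connection conn = DriverManager.getConnection(url, props)) {
                System.out.println("Connected with locale/code set mapping properties set");
            }
        }
    }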

update_jars.sql Script

Use the update_jars.sql script to update the names of jar files in a database after you rename the database.

Java Runtime Environment Variables

Dynamic Server supports the JVM_MAX_HEAP_SIZE, JAR_TEMP_PATH, JAVA_COMPILER, and AFDEBUG environment variables.

Partial Support for Variable-Length Opaque-Types

You can now write UDRs and DataBlade modules in Java.
Dynamic Server supports the following items:
  • Variable-length opaque data types
  • Data I/O conversion routines:
    • input/output
    • send/receive
    • import/export
    • importbin/exportbin

References to J/Foundation Features

For more information on J/Foundation features, see the following manuals:
  • Use JVM 1.2, use JDBC 2.0 features, or write UDRs and DataBlade modules in Java: J/Foundation Developer's Guide
  • Specify Java environment variables: J/Foundation Developer's Guide; IBM Informix Guide to SQL: Reference
  • Specify Java configuration parameters: J/Foundation Developer's Guide; IBM Informix Dynamic Server Administrator's Reference
  • Set GLS environment variables and use the connection properties: J/Foundation Developer's Guide; IBM Informix GLS User's Guide
  • Use the update_jars.sql script: IBM Informix Guide to SQL: Syntax

Java For Mac

Java

Getting Started

A guided introduction and learning path for developers new to Java in Mac OS X.
Mac OS X is the only major consumer operating system that comes complete with a fully configured and ready-to-use Java runtime and development environment. Professional Java developers are increasingly turning to the feature-rich Mac OS X as the operating system of choice for both Mac-based and cross-platform Java development projects. Mac OS X includes the full version of J2SE 1.5, pre-installed with the Java Development Kit (JDK) and the HotSpot virtual machine (VM), so you don't have to download, install, or configure anything.
Deploying Java applications on Mac OS X takes advantage of many built-in features, including 64-bit support, resolution independence, automatic support of multiprocessor hardware, native support for the Java Accessibility API, and the native Aqua look and feel. As a result, Java applications on Mac OS X look and perform like native applications on Mac OS X.

Java for Mac OS X 10.5, Update 1

Java for Mac OS X 10.5, Update 1 is now available via Software Update. This update adds Java SE 6 version 1.6.0_05 to 64-bit Intel Macs.
For more details on this update, visit:
http://docs.info.apple.com/article.html?artnum=307403

Java Articles

Featured Content

Java on Mac OS X Leopard
Java in Leopard brings improvements such as better performance, 64-bit computing, and new tools to better analyse your Java code. Read this article to see how Leopard provides the best possible Java experience on Mac OS X while preserving compatibility and supporting the latest hardware.

Related Links

Java Reference Library

Fundamentals

Essential information for developers using the Mac OS X built-in Java support.

API Reference

Java programming specifications, organized by package and class.

Mailing Lists

Macintosh Java-Dev Mailing List

Mailing list run by Apple about Java development on Mac OS.


Java Technology Features


Here we list the basic features that make Java a powerful and popular programming language:
  • Platform Independence
    • The Write-Once-Run-Anywhere ideal has not been achieved (tuning for different platforms usually required), but closer than with other languages.
  • Object Oriented
    • Object oriented throughout - no coding outside of class definitions, including main().
    • An extensive class library available in the core language packages.
  • Compiler/Interpreter Combo
    • Code is compiled to bytecodes that are interpreted by a Java Virtual Machine (JVM).
    • This provides portability to any machine for which a virtual machine has been written.
    • The two steps of compilation and interpretation allow for extensive code checking and improved security.
  • Robust
    • Exception handling built-in, strong type checking (that is, all data must be declared an explicit type), local variables must be initialized.
  • Several dangerous features of C & C++ eliminated:
    • No memory pointers 
    • No preprocessor
    • Array index limit checking
  • Automatic Memory Management
    • Automatic garbage collection - memory management handled by JVM.
  • Security
    • No memory pointers
    • Programs run inside the virtual machine sandbox.
    • Array index limit checking
    • Code pathologies reduced by
      • bytecode verifier - checks classes after loading
      • class loader - confines objects to unique namespaces. Prevents loading a hacked "java.lang.SecurityManager" class, for example.
      • security manager - determines what resources a class can access such as reading and writing to the local disk.
  • Dynamic Binding
    • The linking of data and methods to where they are located is done at run-time.
    • New classes can be loaded while a program is running. Linking is done on the fly.
    • Even if libraries are recompiled, there is no need to recompile code that uses classes in those libraries.

      This differs from C++, which uses static binding. This can result in fragile classes for cases where linked code is changed and memory pointers then point to the wrong addresses.
  • Good Performance
    • Interpretation of bytecodes slowed performance in early versions, but advanced virtual machines with adaptive and just-in-time compilation and other techniques now typically provide performance at 50% to 100% of the speed of comparable C++ programs.
  • Threading
    • Lightweight processes, called threads, can easily be spun off to perform multiprocessing.
    • Can take advantage of multiprocessors where available
    • Great for multimedia displays.
  • Built-in Networking
    • Java was designed with networking in mind and comes with many classes to develop sophisticated Internet communications.
Features such as the elimination of memory pointers and the checking of array limits greatly help to remove program bugs. The garbage collector relieves programmers of the big job of memory management. These and the other features can lead to a big speedup in program development compared to C/C++ programming.
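
As a small illustration of the Threading item in the list above, here is a minimal sketch that spins off two worker threads and waits for them; the class and thread names are arbitrary.

    public class ThreadSketch {
        public static void main(String[] args) throws InterruptedException {
            // Each Runnable is a lightweight task the JVM can schedule on any available processor.
            Runnable worker = () -> {
                String name = Thread.currentThread().getName();
                for (int i = 0; i < 3; i++) {
                    System.out.println(name + " working, step " + i);
                }
            };

            Thread t1 = new Thread(worker, "worker-1");
            Thread t2 = new Thread(worker, "worker-2");
            t1.start();          // both threads now run concurrently with main
            t2.start();
            t1.join();           // wait for both to finish before exiting
            t2.join();
        }
    }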


Saturday, June 28, 2008

KEY JAVA FEATURES



Java is:
Simple, Object-oriented, Distributed, Interpreted, Robust, Secure, Architecturally Neutral, Portable, High-Performance, Dynamic.


JAVA IS SIMPLE

Java was developed by taking the best points from other programming languages, primarily C and C++. Java therefore utilises algorithms and methodologies that are already proven. Error-prone tasks such as pointer handling and memory management have either been eliminated or are handled by the Java environment automatically rather than by the programmer. Since Java is primarily a derivative of C++, with which most programmers are conversant, it has a familiar feel that renders it easy to use.


JAVA IS OBJECT-ORIENTED

Even though Java has the look and feel of C++, it is a wholly independent language which has been designed to be object-oriented from the ground up. In object-oriented programming (OOP), data is treated as objects to which methods are applied. Java's basic execution unit is the class. Advantages of OOP include: reusability of code, extensibility and dynamic applications.


JAVA IS DISTRIBUTED

Commonly used Internet protocols such as HTTP and FTP, as well as calls for network access, are built into Java. Internet programmers can call on these functions through the supplied libraries and access files on the Internet as easily as they would write to a local file system.
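
A minimal sketch of that built-in networking support: reading a resource over HTTP with the standard java.net classes. The URL is just an example.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class FetchPage {
        public static void main(String[] args) throws Exception {
            // Opening a remote resource looks much like opening a local file.
            URL url = new URL("http://www.example.com/");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }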


JAVA IS INTERPRETED

When Java code is compiled, the compiler outputs Java bytecode, which is an executable for the Java Virtual Machine. The Java Virtual Machine does not exist physically but is the specification for a hypothetical processor that can run Java code. The bytecode is then run through a Java interpreter on any given platform that has the interpreter ported to it. The interpreter converts the bytecode to instructions for the target hardware and executes them.


JAVA IS ROBUST

Java compels the programmer to be thorough. It carries out type checking at both compile and runtime making sure that every data structure has been clearly defined and typed. Java manages memory automatically by using an automatic garbage collector. The garbage collector runs as a low priority thread in the background keeping track of all objects and references to those objects in a Java program. When an object has no more references, the garbage collector tags it for removal and removes the object either when there is an immediate need for more memory or when the demand on processor cycles by the program is low.
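
A short, hedged sketch of those robustness features in action: explicit typing, a handled runtime exception, and the garbage collector taking care of memory.

    public class RobustSketch {
        public static void main(String[] args) {
            int count = 0;                     // every variable has an explicit, checked type
            // count = "five";                 // would be rejected at compile time

            try {
                int[] data = new int[3];
                data[5] = 42;                  // array bounds are checked at runtime
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Caught: " + e);   // handled instead of corrupting memory
            }

            // No free() or delete: objects with no remaining references are reclaimed
            // automatically by the garbage collector.
            byte[] scratch = new byte[1024 * 1024];
            scratch = null;                    // now eligible for collection
            System.out.println("Finished with count = " + count);
        }
    }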


JAVA IS SECURE

The Java language has built-in capabilities to ensure that violations of security do not occur. Consider a Java program running on a workstation on a local area network which in turn is connected to the Internet. Being a dynamic and distributed computing environment, the Java program can, at runtime, dynamically bring in the classes it needs to run either from the workstation's hard drive, from other computers on the local area network, or from a computer thousands of miles away somewhere on the Internet. This ability of classes or applets to come from unknown locations and execute automatically on a local computer sounds like every system administrator's nightmare, considering that viruses, trojan horses, or worms could be lurking on any of the millions of computers on the Internet, ready to invade the local computer system and wreak havoc on it.
Java goes to great lengths to address these security issues by putting in place a very rigorous multilevel system of security:
  • First and foremost, at compile time, pointers and memory allocation are removed thereby eliminating the tools that a system breaker could use to gain access to system resources. Memory allocation is deferred until runtime.
  • Even though the Java compiler produces only correct Java code, there is still the possibility of the code being tampered with between compilation and runtime. Java guards against this by using the bytecode verifier to check the bytecode for language compliance when the code first enters the interpreter, before it ever even gets the chance to run.
    The bytecode verifier ensures that the code does not do any of the following:
    • Forge pointers
    • Violate access restrictions
    • Incorrectly access classes
    • Overflow or underflow the operand stack
    • Use incorrect parameters in bytecode instructions
    • Use illegal data conversions
  • At runtime, the Java interpreter further ensures that classes loaded do not access the file system except in the manner permitted by the client or the user.
Sun Microsystems will soon be adding yet another dimension to the security of Java. They are currently working on a public-key encryption system to allow Java applications to be stored and transmitted over the Internet in a secure encrypted form.
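
As a rough sketch of the runtime side of this security model, the code below asks the installed security manager, if there is one, whether reading a local file is allowed before attempting the read. The file path is only an example.

    import java.io.FileReader;
    import java.io.IOException;

    public class SecuritySketch {
        public static void main(String[] args) {
            SecurityManager sm = System.getSecurityManager();   // null unless one is installed
            try {
                if (sm != null) {
                    sm.checkRead("/tmp/example.txt");            // throws SecurityException if denied
                }
                new FileReader("/tmp/example.txt").close();      // permitted: proceed with the read
                System.out.println("Read permitted");
            } catch (SecurityException denied) {
                System.out.println("Read denied by the security manager");
            } catch (IOException io) {
                System.out.println("File problem: " + io.getMessage());
            }
        }
    }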


JAVA IS ARCHITECTURALLY NEUTRAL

The Java compiler compiles source code to a stage which is intermediate between source and native machine code. This intermediate stage is known as the bytecode, which is neutral. The bytecode conforms to the specification of a hypothetical machine called the Java Virtual Machine and can be efficiently converted into native code for a particular processor.


JAVA IS PORTABLE

By porting an interpreter for the Java Virtual Machine to any computer hardware/operating system, one is assured that all code compiled for it will run on that system. This forms the basis for Java's portability.
Another way Java guarantees portability is by defining a single standard for data sizes, irrespective of processor or operating system platform.
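
A tiny sketch of that single standard for data sizes: the widths below are fixed by the language specification, so the program prints the same values on every platform.

    public class SizeSketch {
        public static void main(String[] args) {
            // These widths come from the Java language, not from the underlying hardware.
            System.out.println("byte  : " + Byte.SIZE + " bits");
            System.out.println("short : " + Short.SIZE + " bits");
            System.out.println("int   : " + Integer.SIZE + " bits");
            System.out.println("long  : " + Long.SIZE + " bits");
            System.out.println("char  : " + Character.SIZE + " bits");
        }
    }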


JAVA IS HIGH-PERFORMANCE

The Java language supports many high-performance features such as multithreading, just-in-time compiling, and native code usage.
  • Java has employed multithreading to help overcome the performance problems suffered by interpreted code as compared to native code. Since an executing program hardly ever uses CPU cycles 100 % of the time, Java uses the idle time to perform the necessary garbage cleanup and general system maintenance that renders traditional interpreters slow in executing applications. [NB: Multithreading is the ability of an application to execute more than one task (thread) at the same time e.g. a word processor can be carrying out spell check in one document and printing a second document at the same time.]
  • Since the bytecode produced by the Java compiler from the corresponding source code is very close to machine code, it can be interpreted very efficiently on any platform. In cases where even greater performance is necessary than the interpreter can provide, just-in-time compilation can be employed whereby the code is compiled at run-time to native code before execution.
  • An alternative to just-in-time compilation is to link in native C code. This yields even greater performance but is more burdensome on the programmer and reduces the portability of the code.

JAVA IS DYNAMIC

By connecting to the Internet, a user immediately has access to thousands of programs and other computers. During the execution of a program, Java can dynamically load classes that it requires either from the local hard drive, from another computer on the local area network or from a computer somewhere on the Internet.
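
A minimal sketch of that dynamic loading: the class name arrives as a string at runtime and is only then located, loaded, and instantiated. The default class name here is just an example.

    public class DynamicLoadSketch {
        public static void main(String[] args) throws Exception {
            // The class to load is not known until runtime; with a suitable class loader it
            // could come from the local disk, the LAN, or the Internet.
            String className = (args.length > 0) ? args[0] : "java.util.ArrayList";

            Class<?> loaded = Class.forName(className);                      // locate and load
            Object instance = loaded.getDeclaredConstructor().newInstance(); // create an instance
            System.out.println("Loaded and created: " + instance.getClass().getName());
        }
    }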

Friday, June 27, 2008

Hibernate (Java)


Written in: Java
OS: Cross-platform (JVM)
Platform: Java Virtual Machine
Genre: Object-relational mapping
Website: http://www.thiyagaraajblog.tk

Hibernate is an object-relational mapping (ORM) library for the Java language, providing a framework for mapping an object-oriented domain model to a traditional relational database. Hibernate solves object-relational impedance mismatch problems by replacing direct persistence-related database accesses with high-level object handling functions. The Hibernate 2.1 framework won a Jolt Award in 2005.[1]

Hibernate is free, open source software distributed under the GNU Lesser General Public License.

Feature summary

Hibernate's primary feature is mapping from Java classes to database tables (and from Java data types to SQL data types). Hibernate also provides data query and retrieval facilities. Hibernate generates the SQL calls and relieves the developer from manual result set handling and object conversion, keeping the application portable to all SQL databases, with database portability delivered at very little performance overhead.

Hibernate provides transparent persistence for Plain Old Java Objects (POJOs). The only strict requirement for a persistent class is a no-argument constructor, not compulsorily public. (Proper behavior in some applications also requires special attention to the equals() and hashCode() methods.[1])
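
A hedged sketch of what such a persistent POJO typically looks like. The Customer class, its fields, and the business key chosen for equals()/hashCode() are hypothetical examples, not a canonical Hibernate entity; a matching mapping (XML or annotations) is assumed to exist.

    import java.io.Serializable;

    public class Customer implements Serializable {
        private Long id;
        private String name;

        Customer() { }                          // no-argument constructor (need not be public)

        public Customer(String name) { this.name = name; }

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // equals()/hashCode() based on a business key, as the paragraph above recommends
        // for some applications.
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Customer)) return false;
            Customer other = (Customer) o;
            return name != null && name.equals(other.name);
        }

        @Override
        public int hashCode() {
            return name == null ? 0 : name.hashCode();
        }
    }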

Hibernate provides a dirty checking feature that avoids unnecessary database write actions by performing SQL updates only on the modified fields of persistent objects.

Hibernate can be used both in standalone Java applications and in Java EE applications using servlets or EJB session beans.

History

Hibernate was developed by a team of Java software developers around the world led by Gavin King. JBoss, Inc. (now part of Red Hat) later hired the lead Hibernate developers and worked with them in supporting Hibernate.

The current version of Hibernate is Version 3.x. This version has new features such as a new Interceptor/Callback architecture, user-defined filters, and JDK 5.0 annotations (Java's metadata feature). Hibernate 3 is also very close to the EJB 3.0 specification (although it was finished before the EJB 3.0 specification was released) and complements it with a wrapper for the Core module that provides conformity with the JSR 220 JPA EntityManager standard.

Application programming interface

The Hibernate API is provided in the Java package org.hibernate.

org.hibernate.SessionFactory interface

An immutable, thread-safe object that creates new Hibernate sessions. Hibernate-based applications are usually designed to make use of only a single instance of the class implementing this interface (often exposed using a singleton design pattern).

org.hibernate.Session interface

Represents a Hibernate session, i.e. the main point of manipulation performed on the database entities. These activities include (among other things) managing the persistence state (transient, persisted, detached) of objects, fetching persisted objects from the database, and managing transaction demarcation.

A Session is intended to last as long as the logical transaction on the database. Because of this, Session implementations are not expected to be thread-safe or to be used by multiple clients.
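
To tie the two interfaces together, here is a hedged sketch of typical SessionFactory/Session usage with explicit transaction demarcation. It assumes a hibernate.cfg.xml on the classpath and the hypothetical Customer class from the sketch earlier in this post.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    public class SessionSketch {
        // One SessionFactory per application, built once and shared.
        private static final SessionFactory FACTORY =
                new Configuration().configure().buildSessionFactory();

        public static void main(String[] args) {
            Session session = FACTORY.openSession();      // short-lived, not thread-safe
            Transaction tx = session.beginTransaction();
            try {
                Customer c = new Customer("Ada");          // hypothetical mapped POJO
                session.save(c);                           // becomes persistent in this session
                c.setName("Ada Lovelace");                 // dirty checking will issue the UPDATE
                tx.commit();                               // flush and commit the logical transaction
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
            FACTORY.close();
        }
    }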

Tuesday, January 22, 2008

Basic OS Concepts

Basic Operating System Concepts:

All operating systems have some common functions; the older the OS, the fewer the functions. Figures 5.1 thru 5.5 show several views of 'the operating system', with schematics showing the perspective from: User Interface; RAM; the *ix Shell visualization; a Windows GUI shell; & with a generic 'command processor' loaded in RAM. The text aggregates the functions into five basic components: user interface, device management, file management, memory management, and processor management. Here's a system analyst's template that pretty much summed up the devices and media used in systems of the 60s: punched card & magnetic tape were used copiously; 'display', magnetic disk & drum, and 'terminal interrupt' were used less often. Those 'manual process' blocks got a heavy workout, usually meaning that some employee needed to stand in front of a card sorter & tend 'gang punches' and other heavy machinery used to read and punch records onto cards.

From the mid-70s onward, a template this size can't hold symbols for all the i/o and other devices of which the modern OS must be 'aware': mouse, control keys, USB, CD, DVD, ZipDisk, game controllers, serial ports, ethernet, &c, &c have all been interfaced with our popular OSs, which make new technology easily accessible to application developers and users.
Standardization in how the devices' controllers work often allows newer, more efficient technology to be deployed without any changes in the OS's device drivers. Examples of this are seen in devices that attach to the PCI bus, IDE controllers, SCSI, USB, and other hardware interfaces in use today. A modern OS goes well beyond static textbook schematics in complexity, and when multiprocessing techniques like SMP are added into the mix the OS's operations become so complex that schematics can't show all that's going on. There's an old axiom about this: 'the computer doesn't have to draw lines'. Instead, computers use data structures like stacks, linked lists, 'process status words', and other techniques to keep track of it all. OS developers must be able to visualize, or otherwise make a 'mental map' of, the data structures and instructions hinted at in the last chapters when they write the memory & processor management schemes that tailor the OS to service each device supported by their platform.
Here are important concepts from the text:
The User Interface (1) is the most important OS component as far as we _users_ are concerned. It might be graphical, or it might be character-based. Most systems today provide a mix. Figure 5.1 shows what the UI makes available to a system's users. That Device Management (2) box is right full these days. File, memory, and processor management functions (3, 4, & 5) round out the classic OS's five components, and each is optimized for the particular platform at hand. In this figure, the text puts the UI at the top of the hierarchy and shows the four other OS functions under the UI.
Another important OS interface does _not_ interface with the User. Application Program Interfaces (APIs) are alternate interfaces that software developers provide so that _programs_, as well as users, can use a program. For example, Microsoft's Excel provides APIs so that a script written in Visual Basic or VB.NET can easily open an Excel workbook and read/write the worksheets, using any formulas or macros that may be in the worksheets' cells.
The Shell interfaces the computer with that most complex of 'devices', the User. A shell accepts users' commands in the format and syntax of its 'command language' (which might include mouseclicks) and can give some kind of error message if an improperly formatted or unknown command is issued, or if the user doesn't have sufficient privilege to use the command. These days, shells are either 'character based' or 'GUI' (Graphical User Interface), as in Windows, a Linux workstation running Gnome or KDE, or a Mac.
A character-based shell generally uses a 'command line' interface where the commands typed are usually the name of a program to load and run. A Linux 'console session' uses a command line interface, and may be entered on a PC's monitor, or remotely from another Linux machine or a Windows machine running 'terminal emulation software' like putty.exe or Windows' secure shell client. A prompt ($, %, and # are common in unix) lets the user know that the system is waiting for their command. Users type in their command a character at a time, perhaps assisted by 'tab completion' or scrolling through a command stack, and commands are 'submitted' to the shell when the user hits the Enter key. On Windows machines, the norm is to use the GUI, discussed next, but there are occasions in system or network management when a command line interface is more desirable than the GUI. Windows 95 & 98 continued to provide DOS 'underneath' Windows, and the icon to get to a Windows command line is labelled 'DOS Command Line' (needs verification). On XP there is no more DOS, per se, although DOS shell commands and scripts are generally supported; now the command line appears when you choose the 'Command Window'.
A graphical-based shell (GUI) adds options where the user can double-click on an icon, or click on a link or menu choice, or use a 'keyboard shortcut' to call up a program in these 'object oriented' environments. A command line is available, but most users use the GUI. When a choice has been made, the OS looks at the properties of the icon or shortcut to discover the 'target' OS function or program to load and run for the user. (In Windows, right-click on an icon to see its properties. The target will likely be a full path to an executable or batch file that the OS can run.)
Commands might be 'built into' the OS, or they might be files kept in directories on the user's 'path'. For example: in Windows the 'copy' command is built into the OS and doesn't have a separate file for the command; the 'format' command's code is kept in c:\Windows\System32 along with other external commands. In Linux, practically all the commands are external and the most commonly used are in /bin, with those likely to be used by a super user kept in /sbin. When a command has been issued by a user, the OS first checks to see if the command is built into the OS, as many of the most primitive functions are. If it is, it's executed instantly. Otherwise, the OS looks on the user's 'path', in order, to find a match. The path is part of a user's 'environment'; the OS starts searching for external commands at the beginning of the path and executes the first match it encounters. In Windows, open a DOS or command prompt window and type 'path' to see the path that you have, or your systems administrator has, set up for you. In Linux, type 'set' to see all the settings for your environment, and find the PATH line. We'll take a side-trip in class looking at commands, and will make new commands by editing 'batch files' on Windows XP and Linux platforms.
Device Management functions allow the computer to communicate with a platform's 'peripheral' devices like disk drives, network interfaces, serial ports, or devices on a SCSI bus or USB. Notice that memory & processor are not peripheral; they are at the center of the CPU's influence, and get their own OS functions. The best news about device management is that it is practically automatic on popular platforms, or they wouldn't be popular. Windows and desktop Linux distributions have processes (kudzu is one in Linux) that recognize that new devices have been added, or that old ones have been removed, and automatically adjust the OS configuration and load drivers to accommodate the changes. In Windows, the Device Manager is used to show and configure devices attached to the system. We'll look at this briefly in class. Linux keeps track of all its devices (and processes) in the /proc and /dev directories, and super users can use various programs, similar to Windows' Device Manager, to configure interfaces or load drivers for printers and other devices that need them. We'll look at some of these, too.

Along with other descriptive stuff, the text addresses a few concepts I'd like to discuss further: IOCS (I/O Control System), logical vs. physical I/O, and interrupts. Desktop PCs used to support Windows & Linux platforms have a BIOS (Basic I/O System) to handle the 'basic' devices used in the bootup process: keyboard, mouse, disk, & monitor. The BIOS is a relatively simple program burned onto a ROM or EPROM that starts running when the mainboard is first powered up, or when it is reset. The BIOS has enough intelligence to display bootup progress and watch the keyboard for a user's keystrokes, like the 'delete key' or 'F8', that have significance for the BIOS at hand. The BIOS knows how to find a 'bootable device' (which may be a CD, floppy, or hard disk) and start the software on that device, which is usually an OS like Windows or Linux that can handle more of the less basic I/O. An IOCS is made up of OS code with primitive commands to communicate directly with controllers for all the peripheral devices likely to be attached to a platform. This way, application programmers can usually write their programs to issue 'logical requests' for I/O without having to know the primitives that handle 'physical I/O' on the devices their code accesses. Think back to the 'cylinder, surface, sector' organization of blocked data on a hard disk. The portion of the IOCS that handles disk access knows how to take a program's open & read statements (logical I/O), convert them to the primitive commands understood by the disk controller, and move blocks of data between disk and RAM (physical I/O), where they are available for the script that is being executed.
Interrupts are signals passed to a CPU to let it know that a device or a process needs its attention. Hardware and software can 'raise an interrupt' that is (as) immediately (as possible) processed by a CPU according to the protocol that has been established for that platform. For example, when network traffic arrives over a LAN the NIC buffers the incoming traffic and 'raises an interrupt' so that the CPU can process the data. Whenever you touch a key on a PC's keyboard an interrupt is generated so that the OS can handle the keystroke for you.

'Hardware interrupts' are some of the most limited 'real estate' on any platform, since each is represented by a trace on the bus. An Intel 8086 processor only provided 16 hardware interrupts, IRQ0 thru IRQ15. Here is a listing of hardware and software interrupts for the 8086. It was easy to have 'conflicts in IRQ settings' on PCs with lots of interface cards in them. Today's Pentiums have more, and looking at the control panel, system folder, device manager, and choosing view by type or connection will show how the 24 (?need to verify this) interrupts are assigned on your PC. Also, the PCI bus and BIOS can do some 'IRQ steering' automatically to keep network admins from pulling out their hair. There are many more 'software interrupts' available than hardware, as you may notice in the above listing.

Interrupt processing is relatively 'expensive' for a CPU, since whatever the CPU is doing is literally interrupted when an interrupt is received. When multiple interrupts are generated at nearly the same instant they are queued. This is a great simplification of 'interrupt processing': the OS 1) saves the status of the 'current' program it is running and may have to save any data in the CPU that is associated with it, 2) transfers control to an appropriate I/O routine for the device on that interrupt, 3) handles the interrupt request, and 4) after the interrupt is serviced, reloads the current program and continues processing. How and where interrupts are processed varies depending on a platform's configuration. For example, a desktop PC running Windows passes an interrupt to the CPU for every keystroke a user makes. Keystrokes made in a system running on a host minicomputer with dedicated 'terminal I/O controllers' interrupt an I/O controller, which responds by storing the keystroke in a buffer and echoing it to the users' displays. Most of these 'intelligent I/O controllers' can handle more complex tasks, like the backspace or other edit keys. The processor on the I/O controller interrupts the CPU only when users hit their Enter keys; then the contents of the buffer for that keyboard are transmitted all at once to the CPU when it acknowledges the interrupt. Disk, network, tape, and other controllers on larger machines are likely to have dedicated microprocessors that perform similarly to an intelligent keyboard controller. They can handle interrupts from the devices on their channel until everything has been queued up to send along to a CPU, which only gets one interrupt instead of the dozens or hundreds handled by the peripheral controller.
File Systems are associated with platforms. A particular platform may support several file systems that may sound familiar, and some platforms can accommodate 'foreign' file systems. Linux distributions currently use the ext3 file system, but there are other options available to a system administrator when a disk drive is being formatted. Windows' single-user OSs have used FAT (File Allocation Table), and more lately FAT32, as their file systems. Windows' servers use NTFS, which adds features needed to secure and back up files in a multi-user environment. A 'hybrid' Windows like XP Pro allows disk partitions to be formatted as either FAT32 or NTFS, and some users keep the FAT because it's sometimes faster since it doesn't have as much to do. Longhorn will come with an entirely new file system when it's released that should make searching and browsing easier and faster.

File system management functions extend the 'logical side' of the IOCS, and allow the OS and application programs to reference data files by 'directory path' and 'name' and leave all the physical & logical I/O to the OS. Directories, or folders, are a common feature on most OSs. A 'disk directory' forms the 'root' of a 'tree structured' directory in most of them. The file system knows how to search this tree very quickly, finding a file by name and returning the drive, partition, cylinder, and sector where the file starts. A multi-user file system must be able to handle issues of 'file locking' or 'record locking' so that it is clearly evident when a file or record sought by one user is being updated by some other user. This isn't a problem for a single-user system like Windows 98 running on a desktop PC.

Linux/Unix extends the 'file system' concept so that almost any device attached to the system can be opened, read from, and written to without too much concern by the application programmer. They also provide for the creation of 'named pipes' and 'sockets' via the system bus, LAN, or The Internet so that programs, running on one machine or many, can easily move data among themselves without programmers having to be concerned with the physical I/O involved. Linux divides all devices into two classes: character devices handle 'streams' of data, like keyboards, USB, or a video camera; block devices transfer data in blocks, like disk, tape, or ramdisk (areas of RAM formatted like a file system). Block devices always transfer data in fixed-sized blocks. A tape drive block might be 16,384 bytes. A disk might be formatted for blocks of 512, 1024, 2048, or more bytes. Character devices transfer data one byte at a time and use some signal specified in the device's protocol to indicate where one data structure begins and another ends. Keyboards typically use ASCII character 13 (carriage return, generated by the Enter key) to signal the end of a line of text. Video devices, scanners, and other character devices use signals expected by their drivers. Since all Unix devices are defined in a directory in 'the file system' (/dev), programmers writing applications for these devices can reference most of them in their code using similar notation. A data file might be named like /home/AStudent/web/index.html. A tape drive might be named like /dev/st0 or /dev/st1, literally 'SCSI Tape Zero' and 'SCSI Tape One'. A serial port might be named like /dev/ttys1 or /dev/ttyS1, depending on how the device attached to it is configured.
For most purposes in Unix or Linux, the same 'open' and 'read' statements are used to get data from any of them into RAM where a program can use them.
The term Boot Process comes from the old adage: lift yourself up by your own bootstraps. 'Bootstrap loaders' are common on most platforms today. We just hit the power switch and the machine 'boots up.' In the good old days, a systems administrator for something like a DEC PDP-8 started one of these big beasts by hitting the power switch, but then nothing else happened. We had to toggle a 'starting address' into switches on the computer's chassis before hitting the 'load & run' button. The starting address might be for a tape drive if a new OS was being installed from a tape, or the address of the controller for the disk drive holding the OS for an ordinary day's run. (I stuck a red dot on the switches that needed to be pressed so I could talk somebody easily through the process if the machine had to be rebooted in my absence.) Today, boot processing is mostly automatic on most platforms, and on a Windows or Linux-based PC it is handled by the BIOS. The BIOS on a mainboard for an Intel processor lets the user set and save a 'bootup sequence' that will be followed whenever the machine is powered up or reset. It was common a few years back to keep it set to look for a 'bootable diskette' in A: first and boot with it if found; then C: (the hard drive) would be checked if the floppy drive was empty. Today, the sequence is more likely to go: CD/DVD, then floppy (if at all), and then C:. Either of these allows the user to 'boot to a floppy' or CD that will install software or hardware, or perhaps pillage the machine, without having to load the complete OS first.
Utilities are programs that are distributed with the OS to handle routine file system tasks. In Windows these are scattered among the accessories and system tools folders, and are used to back up, defrag, or format disks, or maybe recover 'lost' data. In Linux, there are several utilities for backing up data (cpio, tar, cc, & backup) depending on the requirements. Some utility programs may be purchased, perhaps because they do a better job than the one distributed with the OS, or because the OS doesn't provide that utility. In Windows, for example, many systems administrators use Veritas or other 3rd-party software to do backups of data since the backup utility in Windows is slow and clumsy to use. In Linux, a systems administrator who needs to back up a lot of machines might purchase software, or find an open source solution like amanda to automate the process. Windows users are used to buying virus scanners, which can be thought of as utility programs, to make up for the lack of one in Windows.
Memory Management: The text does a good job of outlining and detailing these concepts. Read Chapter 6 and ask questions as needed. Important basic terms are: resident vs. transient routines; concurrency; partitions & regions; segmentation, address translation, displacement, paging, and memory protection. Overlay structures are important for putting more code through memory than there is memory to handle it. Virtual memory systems in multi-processing OSs (most of them today) use techniques like these, along with 'paging' & 'swapping' that allow the contents of 'real memory' to be 'paged in & out' of areas set aside on disk for the purpose. In a large host or server, paging is inevitable as the number of users increases. On a desktop system, it becomes a problem when 'too many windows are open' and we notice everything runs a _lot_ slower. An important practical matter about paging is that 'real memory' access is thousands of times faster than 'virtual memory' access. Systems that have to do a lot of paging are slower than those that don't. Re-entrant coding techniques are used to minimize paging and help keep system response snappy. Big mainframes can control swapping between memory and disk by using RAM that is gigabytes or terabytes in size, but smaller CPUs don't have this advantage quite yet. Maybe next year, as the 64-bit Itanium and Athlon processors advance, we'll see huge RAM on our PCs. Where paging is excessive it's referred to as 'thrashing'. Couple this thrashing with the interrupts generated by several users hitting a host's keyboards and the users will be posting those cartoons of a skeleton covered with spider webs and the caption "How's the system response time today?" This provides a good argument to upgrade the platform to one that can support enough memory & dedicated I/O processors to handle the 'user load.'
Multi-programming is common today on most platforms. Since Windows 95, PCs have been able to run more than one program at a time. Before that, a PC user could have several programs 'up' at the same time, but only one at a time would actually be running. Now, we're used to having a few windows open and seeing that they're all running pretty much OK all the time. Multi-user systems have to do this multi-programming in spades. They may have to keep track of thousands of users' processes and give the illusion that each user has access to the whole system at the same time. The text does a good job of explaining how the Dispatcher, Control Blocks, and Interrupts work together to support multi-processing.
Time-Sharing platforms are geared to providing the shortest possible 'response time' to the largest number of users. They use techniques like roll-in/roll-out, time-slicing, or polling to divide CPU attention among the users. Minicomputer and mainframe platforms with their multi-channels and dedicated i/o processors excel at time-sharing applications and may provide thousands of users with subsecond response time as they work on the system.
Spooling is also common on most platforms today. The most visible application is the 'print spooler' that is provided on most OSs. When we print a Word document in Windows, it doesn't 'go directly to the printer', but is first written to disk & then copied to the printer. The Windows spooler is manifested in the Printers window, where a list of 'spooled jobs' will collect if the target printer is off-line because of a paper jam or other problem. Large, multi-user platforms may have to spool hundreds or thousands of print jobs among dozens or hundreds of printers. They need to have 'industrial strength' interfaces to the print spooler so they can solve problems by redirecting print jobs, or perhaps canceling one that goes wild. (I can relate that many performance problems and system failures have been caused by print jobs that 'run wild' and consume too much, or all, the available disk space!) Spoolers also hold other types of data 'in limbo' on disk until they're accessed by their owners. In Linux, for example, /var/spool/mail holds users' email until they use an email client to move the email into their own 'folders'.
Deadlock is, hopefully, less and less common as platforms' performance increases and larger, faster machines become more affordable. When a system, or one of its components, is overwhelmed it might not be able to handle the user load. A disk that is too busy paging won't have time to retrieve programs or data needed by users -- this might be avoided by designating a separate disk for paging. Multi-user OSs are programmed to avoid common causes of deadlock, such as more than one process trying to access the same disk drive at the same time. Recently, I've heard the term used to describe what happens when too many users try to use an Access database (designed for single-user access) at the same time. The solution here is to get a _real_ DBMS like SQL Server and stop trying to abuse Access. Deadlock can sometimes be solved by 'throwing hardware' at the problem, providing adequate resources for the users.
Network Operating Systems (NOS) include device drivers for network hardware, like NICs (Network Interface Cards), and they include software to send and receive data using various network protocols. NOSs also include functions to 'authenticate' users & other machines, allow them to access system resources for which they are 'authorized', and deny access to unauthorized users.

Before discussing NOS, here are a few paragraphs about their precursors, 'terminal emulators'. Many networks today use terminal emulators to get access to host computers and/or servers made available to users of the network. Before LANs and NOSs came on the scene, host computers, mainframes and minicomputers, had for decades been about the business of authenticating & authorizing users and allowing them to access application software that ran on the host to let them access data stored on the host's devices. As soon as PCs started sitting on desktops there was a need to connect them to some other computer. In the early '80s I recollect lots of PCs taking up space on desks along with a 'terminal', or 'dumb terminal', used to get into some host mainframe or minicomputer. It didn't take long before software came along to save some of the limited space on desks: 'terminal emulation software' and maybe an interface card would be added to a PC so that it could 'emulate', or 'behave like', the dumb terminal it would replace. For example, in the School of Business, IRMA software and an interface card with a co-axial connector were added to PCs and the IBM 3270 terminal could be taken away. This way the PC could do all the PC stuff, like word-processing & spreadsheeting, and could also be used as a terminal on a host computer. Attaching to one of the many different 'minicomputers' of the time was even easier. Most of these used a somewhat standard 'serial interface', the typical PC had two 'serial ports' on its backside, and all that was required to hook the PC up to a host minicomputer was terminal emulation software that would use the serial port for I/O with the minicomputer. Minicomputer manufacturers and third-party software houses provided terminal emulation software that allowed a PC to replace the terminals required. This allowed a single PC to attach to one, or a few, 'host computers'. The host at the center of this star topology might, or might not, have functions to let the PCs share files or other resources. Many of them did, and for most intents and purposes, offices rigged this way had 'a network' of PCs with a minicomputer at the center.