Search Results

Agent-Based Architecture for Web Deployment of Multi-Agents as Conversational Interfaces
Agent-based architecture explains the rationale and basis for developing agents that can interact with users through natural language query/answer patterns developed systematically using AIML (Artificial Intelligence Markup Language) scripts. This thesis also explains the architecture of VISTA (Virtual Interactive Story-Telling Agents), which is used for interactive querying for educational and recreational purposes. Agents are very effective as conversational interfaces when used alongside a graphical user interface (GUI) in applications and Web pages. The platform can support multiple agents with or without a shared knowledge base. Such agents are useful as chat robots for recreation, customer service, and education. The platform is powered by the Java servlet implementation of Program D and hosted in an Apache Tomcat server. The AIML scripting language described herein is an XML-based language and forms the knowledge base of the bot. Animation is provided by Microsoft® Agent technology and a text-to-speech engine.
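For illustration, a minimal AIML category pair of the kind such a knowledge base is built from (the patterns and replies are invented examples, not taken from the actual VISTA scripts):

```xml
<aiml version="1.0">
  <category>
    <!-- The pattern is matched against the user's normalized input. -->
    <pattern>WHAT IS YOUR NAME</pattern>
    <!-- The template is the bot's reply. -->
    <template>My name is VISTA. What story would you like to hear?</template>
  </category>
  <category>
    <!-- A wildcard pattern; <star/> echoes the words the * matched. -->
    <pattern>TELL ME ABOUT *</pattern>
    <template>Here is a story about <star/>.</template>
  </category>
</aiml>
```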
Automatic Software Test Data Generation
In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time consuming, especially when the software being tested is complex. Therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often the desired inputs must satisfy complex constraints, and this makes a random approach unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are designed to solve difficult problems involving the simultaneous satisfaction of many constraints.
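A minimal sketch of the random approach described above (the program under test and its branch condition are invented for illustration):

```python
import random

def program_under_test(x, y):
    """Toy program; the goal is to find inputs that take the 'if' branch."""
    if x * x + y == 1000:
        return "target branch"
    return "other branch"

def random_test_generation(trials=100_000):
    """Sample inputs at random, hoping to exercise the desired feature."""
    for _ in range(trials):
        x, y = random.randint(-100, 100), random.randint(-1000, 1000)
        if program_under_test(x, y) == "target branch":
            return (x, y)
    return None  # the branch was never exercised

print(random_test_generation())
```

As the constraint becomes tighter, the probability of a random hit shrinks toward zero, which is exactly the weakness that search-based techniques such as genetic algorithms aim to overcome.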
Benchmark-based Page Replacement (BBPR) Strategy: A New Web Cache Page Replacement Strategy
World Wide Web caching is widely used throughout today's Internet. When correctly deployed, Web caching systems can lead to significant bandwidth savings, network load reduction, server load balancing, and higher content availability. A document replacement algorithm that can lower retrieval latency and yield a high hit ratio is the key to the effectiveness of proxy caches. More than twenty cache algorithms have been employed in academic studies and in corporate communities, but the existing replacement algorithms have drawbacks. To overcome these shortcomings, we developed a new page replacement strategy named Benchmark-Based Page Replacement (BBPR), in which an HTTP benchmark is used as a tool to evaluate the current network load and server load. In our simulation model, the BBPR strategy shows better performance than LRU (Least Recently Used), the most commonly used algorithm. The tradeoff is a reduced hit ratio; slow pages benefit most from BBPR.
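For reference, a minimal sketch of the LRU baseline the abstract compares against (standard textbook LRU, not the BBPR algorithm itself):

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used document when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def get(self, url):
        if url not in self.pages:
            return None                      # cache miss
        self.pages.move_to_end(url)          # mark as most recently used
        return self.pages[url]

    def put(self, url, document):
        if url in self.pages:
            self.pages.move_to_end(url)
        self.pages[url] = document
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict the LRU entry
```

BBPR differs by also weighing benchmark-measured network and server load when choosing a victim, which is how it trades some hit ratio for lower retrieval latency on slow pages.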
Case-Based Reasoning for Children Story Selection in ASP.NET
This paper describes the general architecture and function of a Case-Based Reasoning (CBR) system implemented with ASP.NET and C#. Microsoft Visual Studio .NET and XML Web Services provide a flexible, standards-based model that allows clients to access data. Web Forms pages offer a powerful programming model for Web-enabled user interfaces. The system provides a variety of mechanisms and services related to story retrieval and adaptation. Users may browse and search a library of text stories. More advanced CBR capabilities were also implemented, including a multi-factor distance calculation for matching user interests with stories in the library, recommendations on optimizing search, and adaptation of stories to match user interests.
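A minimal sketch of what a multi-factor distance calculation for story matching might look like (the system itself is ASP.NET/C#; the factor names and weights here are hypothetical, and Python is used only for brevity):

```python
def distance(user, story, weights):
    """Weighted multi-factor distance; each factor is scored in [0, 1]
    and smaller totals mean a closer match to the user's interests."""
    return sum(w * abs(user[f] - story[f]) for f, w in weights.items())

weights = {"age_level": 0.5, "adventure": 0.3, "humor": 0.2}
user = {"age_level": 0.6, "adventure": 0.9, "humor": 0.4}
library = {
    "The Lost Map":  {"age_level": 0.7, "adventure": 0.8, "humor": 0.2},
    "Giggle Street": {"age_level": 0.5, "adventure": 0.1, "humor": 0.9},
}
best = min(library, key=lambda title: distance(user, library[title], weights))
print(best)  # the closest case in the story library
```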
A Comparison of Meansort and Quicksort
The main purpose of this project is to compare a new sorting method, Meansort, with its predecessor, Quicksort. Meansort uses the mean value of the keys to determine the partition of the file, whereas Quicksort selects a pivot at random. Experiments show that in some respects Meansort is superior to Quicksort, but it is still not ideal, since it always needs a mean value for each partition. This project implements the two methods and determines the situations under which each outperforms the other.
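A minimal sketch of the two partitioning rules, assuming numeric keys (Quicksort is shown with a random pivot, as the abstract describes):

```python
import random

def _partition(a, lo, hi, pivot):
    """Hoare-style partition around a pivot value; returns (j, i)."""
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i, j = i + 1, j - 1
    return j, i

def meansort(a, lo=0, hi=None):
    """Meansort: the pivot is the mean of the keys (one extra pass)."""
    hi = len(a) - 1 if hi is None else hi
    if lo >= hi:
        return
    pivot = sum(a[lo:hi + 1]) / (hi - lo + 1)
    j, i = _partition(a, lo, hi, pivot)
    meansort(a, lo, j)
    meansort(a, i, hi)

def quicksort(a, lo=0, hi=None):
    """Quicksort as described above: the pivot is a randomly chosen key."""
    hi = len(a) - 1 if hi is None else hi
    if lo >= hi:
        return
    pivot = a[random.randint(lo, hi)]
    j, i = _partition(a, lo, hi, pivot)
    quicksort(a, lo, j)
    quicksort(a, i, hi)
```

The extra pass to compute the mean is the cost Meansort pays for a pivot that is guaranteed to lie between the minimum and maximum keys.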
Content-Based Image Retrieval by Integration of Metadata-Encoded Multimedia Features in Constructing a Video Summarizer Application
Content-based image retrieval (CBIR) is the retrieval of images from a collection by means of internal feature measures of the information content of the images. In CBIR systems, text media is usually used only to retrieve exemplar images for further searching by image feature content. This research describes a new method for integrating multimedia text and image content features to increase the retrieval performance of the system. Content-based features extracted from the frames of a video are used to build a storyboard for search and retrieval of images. Metadata-encoded multimedia features include primitive features such as color, shape, and text extracted from an image. Histograms are built for all the extracted features and stored in a database. Images are searched by comparing the histogram values of the query image with the stored values. These histogram values are also used to extract keyframes from the collection of images parsed from a video file. Individual shots are extracted from a video clip and run through processes that extract the features and build the histogram values. A keyframe extraction algorithm then selects the keyframes that make up the storyboard. In video retrieval, speech recognition and other multimedia encodings could improve the CBIR indexing technique and make keyframe extraction and searching more effective. Research in the area of embedding sound and other multimedia could further enhance video retrieval.
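A minimal sketch of the histogram build-and-compare step (a coarse RGB quantization is assumed here; the thesis also builds histograms over shape and text features):

```python
def color_histogram(pixels, bins=8):
    """Quantize (r, g, b) pixels, each channel 0-255, into a
    normalized histogram of bins**3 buckets."""
    hist = [0] * bins ** 3
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins + g * bins // 256) * bins + b * bins // 256
        hist[idx] += 1
    return [h / len(pixels) for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical distributions. Frames
    whose similarity to the previous keyframe drops below a threshold
    can be selected as new keyframes for the storyboard."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```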
Control Mechanisms and Recovery Techniques for Real-Time Data Transmission Over the Internet
Streaming multimedia content with UDP has become popular over distributed systems such as the Internet. Such streams may suffer many losses from dropped packets or late arrivals at the destination, since UDP provides only best-effort delivery. Moreover, UDP has no self-recovery mechanism for congestion collapse or bursty loss, and it cannot inform the sender to adjust its future transmission rate as TCP does. There is therefore a need to incorporate control schemes such as forward error correction, interleaving, congestion control, and error concealment into real-time transmission to limit the effect of losses. Loss can be repaired by retransmission if the round-trip delay permits; otherwise, error concealment techniques are used, based on the type and amount of loss. This paper implements the interleaving technique, with packet spacing and varying interleaver block size, for protecting real-time data from loss and its effects during transmission across the Internet. The packets are interleaved, and a time gap is maintained between two consecutive packets before they are transmitted onto the Internet. This reduces packet loss from congestion and prevents the loss of consecutive packets of information when a burst of several packets is lost. Several experiments have been conducted with video data to analyze the proposed model.
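A minimal sketch of block interleaving, assuming a single block of block_size² packets: packets are written row-wise and transmitted column-wise, so a burst that wipes out up to block_size consecutive packets costs at most one packet per row.

```python
def interleave(packets, block_size):
    """Transmit order: column-by-column read of a row-wise matrix."""
    assert len(packets) == block_size ** 2
    return [packets[row * block_size + col]
            for col in range(block_size)
            for row in range(block_size)]

def deinterleave(received, block_size):
    """Receiver applies the inverse permutation (lost packets are None)."""
    out = [None] * len(received)
    k = 0
    for col in range(block_size):
        for row in range(block_size):
            out[row * block_size + col] = received[k]
            k += 1
    return out

sent = interleave(list(range(16)), 4)
# A burst loss of 4 consecutive packets on the wire...
received = [None if 4 <= i < 8 else p for i, p in enumerate(sent)]
# ...appears at the receiver as isolated single-packet gaps.
print(deinterleave(received, 4))
```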
The Data Structure of a KSAM Key Directory
The purpose of this project is to explore alternative data structures for a disk file which is currently a preorder binary tree. Specifically, the file is the key directory for an implementation of the Keyed Sequential Access Method (KSAM) in a minicomputer operating system. A new data structure will be chosen, with the reasons for that choice given, and it will be incorporated into the existing system.
Design and Implementation of a Text Editor Under Music Interactive Operating System
An interactive text editor is a computer program that allows a user to create and revise a target document, such as program statements, manuscript text, or numeric data, through an online terminal. It allows text to be modified and corrected many orders of magnitude faster and more easily than manual correction would allow. The most important characteristic of a text editor is its convenience for the user. Such convenience requires a simple, mnemonic command language which is easy to use and understand.
Developing a Test Bed for Interactive Narrative in Virtual Environments
As Virtual Environments (VEs) become a more commonly used method of interaction and presentation, supporting users as they navigate and interact with scenarios presented in a VE will be a significant issue. A key step in understanding the needs of users in these situations will be observing them perform representative tasks in a fully developed environment. In this paper, we describe the development of a test bed for interactive narrative in a virtual environment. The test bed was specifically developed to present multiple, simultaneous sequences of events (scenarios or narratives) and to support user navigation through these scenarios. These capabilities will support the development of multiple user-testing scenarios, allowing us to study and better understand the needs of users of narrative VEs.
DICOM Image Scrubbing Software Library/Utility
This software is aimed at providing a user-friendly, easy-to-use environment for the user to scrub (de-identify or modify) DICOM header information. Some existing tools either anonymize or default the values without user interaction, so the user does not have the flexibility to edit the header information, and one cannot scrub a set of images simultaneously (batch scrubbing). This motivated the development of a tool that can scrub a set of images in a single step more efficiently. This document also addresses the security issues of patient confidentiality, to achieve protection of patient-identifying information, and some technical requirements.
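A minimal sketch of batch header scrubbing using the pydicom library (a modern stand-in for the utility the thesis built; the directory names and field list are hypothetical):

```python
from pathlib import Path
import pydicom

# Patient-identifying header elements to blank out.
FIELDS = ["PatientName", "PatientID", "PatientBirthDate"]

def scrub_batch(src_dir, dst_dir):
    """De-identify every DICOM file in src_dir in a single step."""
    Path(dst_dir).mkdir(exist_ok=True)
    for path in Path(src_dir).glob("*.dcm"):
        ds = pydicom.dcmread(path)
        for field in FIELDS:
            if hasattr(ds, field):
                setattr(ds, field, "")   # blank the identifying element
        ds.save_as(Path(dst_dir) / path.name)

scrub_batch("studies", "scrubbed")
```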
Ensuring Authenticity and Integrity of Critical Information Using XML Digital Signatures
It has become evident over the past five years that Internet use has been hampered by the lack of sufficient security and of a legal framework to enable electronic commerce to flourish. Despite these shortcomings, governments, businesses, and individuals are using the Internet more often as an inexpensive and ubiquitous means to disseminate and obtain information, goods, and services. The Internet is insecure: potentially millions of people have access, and "hackers" can intercept anything traveling over the wire. There is no way to make it a secure environment; it is, after all, a public network, hence its availability and affordability. In order for it to serve as a vehicle for legally binding transactions, efforts must be directed at securing the message itself, as opposed to the transport mechanism. Digital signatures have evolved in recent years as the best tool for ensuring the authenticity and integrity of critical information in the so-called "paperless office". A model using XML digital signatures is developed, and the level of security provided by this model in a real-world scenario is outlined.
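The primitive underneath an XML digital signature is an asymmetric sign/verify operation over the message bytes. A minimal sketch using the Python cryptography package (this shows only the primitive; a real XML-DSig additionally canonicalizes the XML and wraps the digest in SignedInfo/Signature elements):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The critical information whose authenticity and integrity we protect.
message = b"<order><item>widget</item><quantity>2</quantity></order>"

# Sender: sign the message with the private key.
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: verify with the public key; this raises InvalidSignature
# if the message was tampered with in transit.
key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```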
Evaluation of MPLS Enabled Networks
Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven the demand for increased and guaranteed bandwidth over the network. The diverse requirements of a wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance and reliable option to address these challenges, along with additional features not addressed before. This problem in lieu of thesis describes how the new paradigm of MPLS is advantageous over the conventional architecture. The motivation for this paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some of the traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology.
The Feasibility of Multicasting in RMI
With the growing reach of the Internet and networking technologies, simple, powerful, easily maintained distributed applications need to be developed. Such applications can benefit greatly from distributed computing concepts. Despite its powerful mechanisms, Jini has yet to be accepted in mainstream Java development; until that happens, we need to find better Remote Method Invocation (RMI) solutions. This paper examines the feasibility of implementing multicasting in RMI. Multicasting capability can be added to RMI using a Jini-like technique. Support for multicast over the unicast reference layer is also studied, and a code example showing how this can be done is included.
Field Programmable Devices and Reconfigurable Computing
The motivation behind this research is the idea of a computing device that can dynamically reconfigure itself. The goal of this work is to measure the computational power of reconfigurable machines in an abstract manner by modeling FPGAs as abstract computing machines. Modeling FPGAs in terms of automata theory provides a basis for answering fundamental questions about their capabilities. If a finite state machine (FSM) or a Turing machine (TM) has the capability of reconfiguring its finite control, does this ability give the abstract computing device new computational power? In other words, are a reconfigurable FSM, TM, or cellular automaton more powerful than their corresponding non-reconfigurable versions?
FORTRAN Graphics Library
The objective of this work is to help the faculty, staff, and students of NTSU use the CalComp plotting facility easily. The work is therefore written in a step-by-step, self-explanatory way to help the reader understand and grasp the essential techniques of computer plotting. Each subroutine illustrated in this work has been run and checked on the NTSU computer's CalComp plotting facility; the results of the sample programs and the illustrated graphs should be very useful for understanding each individual subroutine. The software packages are stored on the magnetic disk of the IBM 360 computer as the standard graphics subroutines. These subroutines were written in FORTRAN IV. The user can write a driving program to call these subroutines and input the desired data to the computer for computation. The results of the computation are output and stored on magnetic tape.
Implementation of Back Up Host in TCP/IP
This problem in lieu of thesis considers a TCP client H1 that makes a connection to a distant server S and downloads a file. If H1 crashes in the midst of the download, the TCP connection from H1 to S is lost. When H1 later restarts, the TCP connection from H1 to S must be reestablished and the file downloaded again; nothing can happen until host H1 restarts. Now consider a situation where there is a standby host H2 for the host H1. H1 and H2 monitor each other's health through heartbeat messages (as in SCTP), exchanged throughout the data transmission process so that the failure of either host is detected. If H2 detects the failure of H1, then H2 takes over; that is, all resources assigned to H1 are reassigned to or taken over by H2. In particular, the IP addresses that were originally assigned to H1 are assigned to H2. In this scenario, the TCP connection between H1 and S is moved to a connection between H2 and S without disrupting the TCP connection. The distant server S is not aware of any changes occurring at the clients.
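A minimal sketch of the heartbeat-monitoring side on the standby host H2 (the port, timeout, and takeover action are hypothetical; real failover would also reassign H1's IP address and migrate the TCP state, which this sketch only marks with a print):

```python
import socket

HEARTBEAT_PORT = 9000   # hypothetical port H1 sends heartbeats to
TIMEOUT = 3.0           # seconds of silence before declaring H1 dead

def monitor_peer():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", HEARTBEAT_PORT))
    sock.settimeout(TIMEOUT)
    while True:
        try:
            sock.recvfrom(64)          # heartbeat arrived; H1 is alive
        except socket.timeout:
            print("H1 failed: take over its IP address and connections")
            break

monitor_peer()
```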
Implementation of Scalable Secure Multicasting
A large number of applications, such as multi-player games, video conferencing, chat groups, and network management, are presently based on multicast communication. As the group communication model is deployed for mainstream use, it is critical to provide security mechanisms that facilitate confidentiality, authenticity, and integrity in group communications. Providing security in multicast communication requires addressing the problem of scalability in group key distribution. Scalability is a concern in group communication due to group membership dynamics: joining and leaving of members requires the distribution of a new session key to all the existing members of the group. The two approaches to key management, centralized and distributed, are reviewed. A hybrid solution is then provided, representing a more scalable and robust approach to a secure multicast framework. This framework is then implemented in an example application, a multicast news service.
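To make the scalability problem concrete, a sketch of naive centralized rekeying (Fernet from the Python cryptography package stands in as the cipher; the member names are hypothetical). Every join or leave costs one message per remaining member, the O(n) behavior that tree-based and hybrid schemes aim to reduce:

```python
from cryptography.fernet import Fernet

def rekey(member_keys):
    """Generate a fresh session key and wrap it for every remaining
    member under that member's long-term individual key."""
    session_key = Fernet.generate_key()
    return session_key, {m: Fernet(k).encrypt(session_key)
                         for m, k in member_keys.items()}

members = {name: Fernet.generate_key() for name in ("alice", "bob", "carol")}
members.pop("carol")                     # carol leaves the group...
session_key, messages = rekey(members)   # ...so everyone else must be rekeyed
print(len(messages), "rekey messages")   # one per remaining member
```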
Machine Learning Techniques for Conversational Agents
Machine learning is the ability of a machine to perform better at a given task using its previous experience. Various algorithms, such as decision trees, Bayesian learning, artificial neural networks, and instance-based learning algorithms, are widely used in machine learning systems. Current applications of machine learning include credit card fraud detection, customer service based on purchase history, games, and many more. The application of machine learning techniques to natural language processing (NLP) has increased tremendously in recent years; examples are handwriting recognition and speech recognition. The problem we tackle in this problem in lieu of thesis is applying machine learning techniques to improve the performance of a conversational agent. The OpenMind repository of common sense, in the form of question-answer pairs, is treated as the training data for the machine learning system. The system interfaces with WordNet to capture important semantic and syntactic information about the words in the sentences. Further, the k-nearest neighbors algorithm, an instance-based learning algorithm, is used to simulate a case-based learning system. The resulting system is expected to be able to answer new queries using knowledge gained from the training data it was fed.
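A minimal sketch of the instance-based step (bag-of-words cosine similarity stands in for the WordNet-derived features the actual system uses):

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(c * v[w] for w, c in u.items())
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def answer(query, qa_pairs, k=3):
    """k-nearest neighbors over stored question-answer pairs: rank the
    stored questions by similarity and vote among the top k answers."""
    qv = Counter(query.lower().split())
    ranked = sorted(qa_pairs, reverse=True,
                    key=lambda qa: cosine(qv, Counter(qa[0].lower().split())))
    votes = Counter(a for _, a in ranked[:k])
    return votes.most_common(1)[0][0]

pairs = [("what color is the sky", "blue"),
         ("what color is grass", "green"),
         ("what is the sky made of", "air")]
print(answer("what is the color of the sky", pairs, k=1))
```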
Machine Recognition of Hand-Sent Morse Code Using the M6800 Microcomputer
This research is the result of an effort to provide real-time machine recognition of hand-sent Morse code through the use of the M6800 microcomputer. While the capability to recognize hand-sent Morse code messages by machine has been demonstrated before on large-scale special-purpose computers, on minicomputers, and even on the M6800 microcomputer, the main contribution of this paper is to demonstrate it with relatively understandable hardware and software.
Macro - Preprocessor for 6809 Cross Assembler
Assembly is commonly viewed as having two stages. The first is the preprocessor stage, in which a single instruction, called the macro instruction, is replaced with a sequence of instructions called the macro definition. The second is the processor stage, in which the output from the first stage is assembled into machine language instructions for a particular computer. This paper describes the first stage, the macro preprocessor.
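A minimal sketch of that first stage (the macro name and its two-line definition are invented; the in-place expansion is the point):

```python
# Hypothetical macro table mapping a macro instruction to its definition.
MACROS = {
    "PUSHALL": ["        PSHS A,B,X,Y",
                "        PSHS U"],
}

def preprocess(lines):
    """Replace each macro instruction with its macro definition; all
    other lines pass through unchanged to the assembler stage."""
    out = []
    for line in lines:
        out.extend(MACROS.get(line.strip(), [line]))
    return out

source = ["        LDA #$10", "PUSHALL", "        RTS"]
print("\n".join(preprocess(source)))
```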
Merlin Classifier System
There is a natural tendency for biological systems to change as their environments change. The fittest in the biological systems survive, adapt to their environment, and multiply while the weakest in the environment diminish. There have been attempts in computer science to model the processes of natural selection and survival which occur in biological systems in order to obtain more efficient and effective machine-learning algorithms. Genetic algorithms are the result of these attempts.
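A toy sketch of the genetic-algorithm loop this models, with selection of the fittest, crossover, and mutation (the bit-string encoding and fitness function are placeholders, not Merlin's actual representation):

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100):
    """Evolve bit strings: the fittest survive and reproduce."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # natural selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]              # crossover
            if random.random() < 0.1:
                child[random.randrange(length)] ^= 1   # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # toy goal: maximize the number of 1 bits
print(best, sum(best))
```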
Mini-ADA Compiler Project
The Ada language is one of the most controversial topics in computer science today. Ada was originally designed as a solution to the software maintenance problems encountered by the United States Department of Defense[2], and as a multi-purpose language to be used particularly in embedded computer systems[7]. Never before has such a project been undertaken: the Ada effort does not simply entail the construction of a new compiler or a new language definition; it is this and a great deal more.
Multiple Window Editor
This paper presents the design purpose and design process of the Multiple Window Editor. The Multiple Window Editor is a program that allows the user to edit or view different files, or the same file, on the screen through the window facilities it provides. All windows can be dynamically created, changed, moved, and destroyed. The main purpose of the program is to improve the programming environment for its users. The design motivations are introduced through a comparison of existing window facilities and editor components. The design process is introduced by analyzing the design decisions, design tradeoffs, and implementation problems.
OLAP Services
On-line Analytical Processing (OLAP) is a platform for providing analytical power over the data in a database. This paper discusses a system that integrates data from two remote legacy reservation systems into one integrated database server, along with the design of an OLAP database and the building of an OLAP cube for data warehousing. The OLAP cube is useful for analyzing data and for making various business decisions. Data Transformation Services (DTS) in Microsoft® SQL Server 2000 is used to package the collection of data and to refresh the data in the databases. The OLAP cube is designed using Microsoft® Analysis Server.
Performance Evaluation of MPLS on Quality of Service in Voice Over IP (VoIP) Networks
The transmission of voice data over Internet Protocol (IP) networks is rapidly gaining acceptance in the field of networking. Most voice transmission in IP networks takes place in Internet telephony, also known as IP telephony or Voice over IP (VoIP). VoIP is undergoing many enhancements to provide end users with the same quality as the public switched telephone network (PSTN). These enhancements are mostly required in quality of service (QoS) for the transmission of voice data over IP networks. With recent developments in the networking field, various protocols have come to market to provide QoS in IP networks; of these, Multiprotocol Label Switching (MPLS) is the most reliable and promising protocol for QoS. The problem of this thesis is to develop an IP-based virtual network, with end hosts and routers, implement MPLS on the network, and analyze its QoS for voice data transmission.
PILOT for the Apple II Microcomputer
PILOT (Programmed Inquiry, Learning or Teaching) is a simple, conversational language developed in 1969 by John A. Starkweather at the University of California Medical Center in San Francisco. Originally designed for computer assisted instructional needs, PILOT also has been effectively used as an introductory computer language. The PILOT system developed for the Apple II microcomputer consists of two programs, PILOT EDITOR and PILOT DRIVER, which are written in Applesoft and which use the Apple II disk operating system. The PILOT system was designed to facilitate easy authoring and execution of programs written in an extended version of the PILOT language. Due to the memory requirements of the programs and the Apple II disk operating system, the PILOT system described here should be executed on a machine with at least 32k bytes of random access memory.
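For flavor, a short program in the style of classic PILOT (an invented example; the extended dialect executed by PILOT DRIVER may differ in details): T: types text, A: accepts input, M: matches the input, and TY:/TN: type conditionally on whether the last match succeeded.

```
T:What is the capital of France?
A:
M:PARIS
TY:Correct!
TN:No, the answer is Paris.
```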
The Principles of Relational Databases
Every business has to keep records. Sometimes these records have to be presented in a standardized form, or more often they can be arranged in any way that suits the user. Business records are of little use unless they can be referred to quickly, to provide information when it is required. In computer systems it is essential to be able to recognize any particular record in a data file which is a collection of similar records kept on secondary computer storage devices.
A Quality of Service Aware Protocol for Power Conservation in Wireless Ad Hoc and Mobile Networks
Power consumption is an important issue for mobile computers, since they rely on batteries with short lifetimes. Conservation techniques are commonly used in the hardware design of such systems, but the network interface is also a significant consumer of power, and considerable research needs to be devoted to designing a low-power network protocol stack. Due to the dynamic nature of wireless networks, adaptations are necessary to achieve energy efficiency and a reasonable quality of service. This paper presents the application of energy-efficient techniques to each layer in the network protocol stack, with feedback provided depending on the performance of the new design. A comparison of two existing MAC protocols is also made, showing that E2MAC is better suited to high power conservation. Multimedia applications can achieve optimal performance if they are aware of the characteristics of the wireless link; relying on the underlying operating system software and communication protocols to hide the anomalies of the wireless channel requires an efficient energy-consumption methodology and fair quality of service, such as E2MAC provides. This report also discusses several concerns of energy efficiency in wireless communication and reviews the seven-layer model defined by the International Organization for Standardization.
Refactoring FrameNet for Efficient Relational Queries
The FrameNet database is used in a variety of NLP research and applications, such as word sense disambiguation, machine translation, information extraction, and question answering. The database is currently distributed in XML format. Though XML is a wholesome way of distributing the data in its entirety, it is not practical for use unless converted into a more application-friendly database. In light of this, we have successfully converted the XML database to a relational MySQL™ database. This conversion reduced the amount of data storage to less than half. Most importantly, the new database enables fast, complex querying and facilitates use by applications and research. We show the steps taken to ensure relational integrity of the data during the refactoring process, along with a simple demo application demonstrating ease of use.
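A sketch of the kind of refactoring involved, using a tiny invented XML fragment and SQLite in place of MySQL (the real FrameNet schema has many more element types and foreign keys):

```python
import sqlite3
import xml.etree.ElementTree as ET

XML = """<frames>
  <frame id="7" name="Motion">
    <lexUnit id="42" name="move.v"/>
    <lexUnit id="43" name="go.v"/>
  </frame>
</frames>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE frame (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE lex_unit (id INTEGER PRIMARY KEY, name TEXT,
              frame_id INTEGER REFERENCES frame(id))""")

for frame in ET.fromstring(XML).iter("frame"):
    db.execute("INSERT INTO frame VALUES (?, ?)",
               (int(frame.get("id")), frame.get("name")))
    for lu in frame.iter("lexUnit"):
        # the parent's id becomes a foreign key, preserving relational integrity
        db.execute("INSERT INTO lex_unit VALUES (?, ?, ?)",
                   (int(lu.get("id")), lu.get("name"), int(frame.get("id"))))

# A query that would mean walking the whole XML tree is now a fast join.
print(db.execute("""SELECT f.name, l.name FROM frame f
                    JOIN lex_unit l ON l.frame_id = f.id""").fetchall())
```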
A Report on Control of Access to Stored Information in a Computer Utility
Time-sharing computer systems permit large numbers of users to operate on common sets of data and programs. Since certain parts of these computer resources may be sensitive or proprietary, there exists the risk that information belonging to one user may, contrary to his intent, become available to other users, and there is the additional risk that outside agencies may infiltrate the system and obtain information. The question naturally arises of protecting one user's stored programs and data against unauthorized access by others.
Secret Key Agreement without Public-Key Cryptography
Secure communication is a primary challenge in today's information networks. In this project an efficient secret-key agreement protocol is described and analyzed, along with other existing protocols. We focus primarily on Leighton and Micali's secret-key agreement without the use of public-key encryption techniques. The Leighton-Micali protocol is extremely efficient when implemented in software and has significant advantages over existing systems such as Kerberos. In this method the secret keys are agreed upon using a trusted third party known as the trusted agent. The trusted agent generates the keys and writes them to a public directory before it goes offline. The communicating entities can retrieve the keys either from the online trusted agent or from the public directory service and agree upon a symmetric key without any public-key procedures. The principal advantage of this method is that the user verifies the authenticity of the trusted agent before using the keys generated by it. The Leighton-Micali scheme is not vulnerable to present-day attacks such as fabrication, modification, or denial of service. The Leighton-Micali protocol can be employed in real-time systems such as smart cards. In addition to the security properties and the simplicity of the protocol, our experiments show that in practice the time to generate keys is very low, faster than Diffie-Hellman key exchange for the same problem.
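A simplified sketch of the core symmetric-key mechanism: the trusted agent derives each user's secret from a master key, publishes a pair value to the directory, and goes offline; either party can then compute the shared key with no public-key operations. (HMAC-SHA256 stands in for the keyed hash, and the authentication values of the full Leighton-Micali protocol are omitted here.)

```python
import os, hmac, hashlib

def prf(key, ident):
    """Keyed hash used to derive per-identity values."""
    return hmac.new(key, ident.encode(), hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Trusted agent setup: each user U receives the secret K_U = prf(master, U).
master = os.urandom(32)
K = {u: prf(master, u) for u in ("alice", "bob")}

# Pair value the agent writes to the public directory before going offline.
pair_ab = xor(prf(K["alice"], "bob"), prf(K["bob"], "alice"))

# Bob computes the shared key directly; Alice recovers the same key from
# her own secret plus the public pair value -- no public-key steps anywhere.
k_bob = prf(K["bob"], "alice")
k_alice = xor(prf(K["alice"], "bob"), pair_ab)
assert k_alice == k_bob
```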
Security Problems in the 802.11 Wireless Networks Standard Due to the Inefficiency of the Wired Equivalent Privacy Protocol
Due to the rapid growth of wireless networking, the fallible security of the 802.11 standard has come under close scrutiny. Nowadays most organizations are eager to set up wireless local area networks to reduce the hassles of the limited mobility of conventional wired networks. However, there are serious security issues that need to be sorted out before everyone is willing to transmit valuable corporate information over a wireless network. This report documents the inherent flaws in the Wired Equivalent Privacy protocol used by the 802.11 standard and the ensuing security breaches that can occur to a wireless network because of these flaws. The solutions suggested in this report might not by themselves make the 802.11 standard secure, but they will surely help in the lead-up to a secure wireless networking standard.
Self-Optimizing Dynamic Finite Functions
Finite functions (also called maps) describe a number of key computations and storage mechanisms used in software and hardware interpreters. Their presence across various memory and speed hierarchies in hardware, and across various algorithmic and compiler optimization processes in software, suggests encapsulating dynamic size changes and representation optimizations in a single abstraction usable across traditional computation mechanisms. We developed a memory allocator for testing finite functions, implemented some dynamic finite functions, and performed experiments to measure their performance. We also developed some simple but powerful application programming interfaces (APIs) for these finite functions.
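A minimal sketch of a self-optimizing finite function: a single map abstraction that changes its internal representation as it grows (the threshold is arbitrary; the thesis's allocator-backed versions are more elaborate):

```python
class DynamicMap:
    """Finite function that starts as a small association list and
    upgrades itself to a hash table once it grows past a threshold."""
    THRESHOLD = 8

    def __init__(self):
        self.pairs = []     # small representation: linear scan
        self.table = None   # large representation: hash table

    def __setitem__(self, key, value):
        if self.table is not None:
            self.table[key] = value
            return
        for i, (k, _) in enumerate(self.pairs):
            if k == key:
                self.pairs[i] = (key, value)
                return
        self.pairs.append((key, value))
        if len(self.pairs) > self.THRESHOLD:
            self.table = dict(self.pairs)   # representation optimization
            self.pairs = None

    def __getitem__(self, key):
        if self.table is not None:
            return self.table[key]
        for k, v in self.pairs:
            if k == key:
                return v
        raise KeyError(key)
```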
Server Load Balancing
Server load balancing technology has received much attention as business has moved toward e-commerce. The idea is to have a set of clustered servers that share the load, as opposed to a single server, to achieve better performance and throughput. In this problem in lieu of thesis, I propose and evaluate an implementation of a prototype scalable server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the round-robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether the connection should be redirected to a different host, namely the host with the fewest established connections. This problem in lieu of thesis outlines the history of load balancing, the various options available today, and finally the approach taken in implementing the prototype and the corresponding findings.
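A minimal sketch of the redirection decision (hypothetical host names; round-robin DNS spreads the initial connections, and this policy then picks the host with the fewest established connections):

```python
class LeastConnections:
    """Track established connections per host and pick the least loaded."""

    def __init__(self, hosts):
        self.active = {h: 0 for h in hosts}

    def choose(self):
        host = min(self.active, key=self.active.get)
        self.active[host] += 1
        return host

    def close(self, host):
        self.active[host] -= 1

lb = LeastConnections(["host-a", "host-b", "host-c"])
print(lb.choose())  # host-a
print(lb.choose())  # host-b (host-a now has one established connection)
```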
Study and Sample Implementation of the Secure Shell Protocol (SSH)
Security is one of the main concerns of users who need to connect to a remote computer for various purposes, such as checking e-mail or viewing files. However, in today's computer networks neither privacy nor delivery to the intended client is guaranteed. If data is transmitted over the Internet or a local network as plain text, it may be captured and viewed by anyone with a little technical knowledge; this may include sensitive data such as passwords. Big businesses use firewalls and virtual private networks and encrypt their transmissions to counter this, at high cost. The Secure Shell protocol (SSH) provides an answer. SSH is a software protocol for secure communication over an insecure network; it not only offers authentication of hosts but also encrypts the sessions between the client and the server, and it is transparent to the end user. This problem in lieu of thesis makes a study of SSH, creates a sample secure client and server that follow SSH, and examines their performance.
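For context, this is what an SSH session looks like from the client side using the paramiko library (the host name and credentials are placeholders; the thesis builds its own sample client and server rather than using an existing library):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only

# Authentication and all subsequent traffic are encrypted by SSH.
client.connect("server.example.net", username="demo", password="secret")

stdin, stdout, stderr = client.exec_command("ls -l")
print(stdout.read().decode())
client.close()
```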
A Survey of Computer Systems: IBM System/360, 3031, The Decsystem-20, The Univac 1100, The Cray-1, and The AS/5000
This is a brief survey of some of the popular computer systems. As many features as possible have been covered in order to get an overview of the systems under consideration.
Text Processing for Thai Characters
The purpose of this project is 1) to create a Thai character set for text processing, 2) to write a text processing program for the character set, and 3) to allow users to create and save the text.
Triangle: A Teaching Program of High School Geometry
Among the early applications of computers, one can find frequent mention of intelligent instructional systems. Such systems represent a new generation of learner-based computer-aided instruction (CAI), preceded in time by the original frame-based systems and an intervening generation of expert-based CAI. The history of CAI is thus characterized by three generations: frame-based CAI, expert-based CAI, and learner-based CAI.
User Modeling Tools for Virtual Architecture
As the use of virtual environments (VEs) becomes more widespread, user needs are becoming a more significant consideration in those environments. In order to adapt to the needs of the user, a system should be able to infer user interests and goals. I developed an architecture for user modeling that infers users' interests in a VE by monitoring their actions. In this paper, I discuss the architecture and the virtual environment that was created to test it. The architecture employs sensors that keep track of all the users' actions, data structures that store a record of significant events that have occurred in the environment, and a rule base. The rule base continually monitors the data collected from the sensors, the world state, and the event history in order to update the user goal inferences. These inferences can then be used to modify the flow of events within a VE.
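A minimal sketch of the sensor/event-history/rule-base loop (the actions, targets, and rules are invented for illustration, not taken from the actual system):

```python
events = []      # event history appended to by sensors
interests = {}   # current user-goal inferences

RULES = [
    # (predicate over the event history, inference it supports)
    (lambda ev: sum(1 for a, t in ev if a == "look_at" and t == "painting") >= 3,
     "user is interested in the artwork"),
    (lambda ev: any(a == "open" and t == "door" for a, t in ev),
     "user wants to explore the next room"),
]

def sensor_report(action, target):
    """Called by a sensor whenever the user acts in the VE."""
    events.append((action, target))
    for rule, inference in RULES:
        interests[inference] = rule(events)   # re-evaluate inferences

for _ in range(3):
    sensor_report("look_at", "painting")
print(interests)
```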
Web Services for Libraries
Library information systems use different software applications and automated systems to gain access to distributed information. Rapid application development, changes to existing software applications, and development of new software on different platforms can make it difficult for library information systems to interoperate. Web services are used to offer better information access and retrieval solutions, making them more cost effective for libraries. This research focuses on how web services are implemented with standard protocols such as SOAP, WSDL, and UDDI, using different programming languages and platforms, to achieve interoperability for libraries. It also shows how libraries can make use of this new technology. Since web services built on different platforms can interact with each other, libraries can access information with more efficiency and flexibility.