UNT Libraries - 236 Matching Results

Search Results

A Left-to-Right Parsing Algorithm for THIS Programming Language

Description: The subject of this investigation is a specific set of parsers known as LR parsers. Of primary interest is an LR parsing method developed by DeRemer, which specifies a translation method that can be defined by a Deterministic Push-Down Automaton (DPDA). The method of investigation was to apply DeRemer's parsing technique to a specific language known as THIS Programming Language (TPL). The syntax of TPL was redefined as state diagrams, and these state diagrams were, in turn, encoded into two tables: a State-Action table and a Transition table. The tables were then incorporated into a PL/I adaptation of DeRemer's algorithm and tested against various TPL statements.
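
As a rough illustration of the table-driven idea (not DeRemer's actual tables, PL/I code, or the TPL grammar), the following Python sketch drives a shift/reduce loop from a hand-written State-Action table and Transition table for the toy grammar E -> E + n | n.

```python
# A minimal table-driven shift/reduce parser in the spirit of an LR parser,
# driven by a State-Action table and a Transition (goto) table.  The grammar
# here is a toy (E -> E + n | n), NOT the TPL grammar from the thesis.

ACTION = {  # State-Action table: (state, terminal) -> action
    (0, "n"): ("shift", 1),
    (1, "+"): ("reduce", 2), (1, "$"): ("reduce", 2),
    (2, "+"): ("shift", 3),  (2, "$"): ("accept", None),
    (3, "n"): ("shift", 4),
    (4, "+"): ("reduce", 1), (4, "$"): ("reduce", 1),
}
GOTO = {(0, "E"): 2}          # Transition table: (state, nonterminal) -> state
PRODUCTIONS = {1: ("E", 3),   # production 1: E -> E + n  (3 symbols on the right)
               2: ("E", 1)}   # production 2: E -> n      (1 symbol on the right)

def parse(tokens):
    """Return True if the token stream (ending in '$') is accepted."""
    stack, i = [0], 0
    while True:
        action = ACTION.get((stack[-1], tokens[i]))
        if action is None:
            return False                      # syntax error
        kind, arg = action
        if kind == "shift":
            stack.append(arg); i += 1
        elif kind == "reduce":
            lhs, length = PRODUCTIONS[arg]
            del stack[len(stack) - length:]   # pop |rhs| states
            stack.append(GOTO[(stack[-1], lhs)])
        else:                                 # accept
            return True

print(parse(["n", "+", "n", "+", "n", "$"]))  # True
print(parse(["n", "+", "+", "$"]))            # False
```
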
Date: May 1976
Creator: Hooker, David P.

Linearly Ordered Concurrent Data Structures on Hypercubes

Description: This thesis presents a simple method for the concurrent manipulation of linearly ordered data structures on hypercubes. The method is based on the existence of a pruned binomial search tree rooted at any arbitrary node of the binary hypercube. The tree spans any arbitrary sequence of n consecutive nodes containing the root, using a fan-out of at most ⌈log₂ n⌉ and a depth of ⌈log₂ n⌉ + 1. Search trees spanning non-overlapping processor lists are formed using only local information, and can be used concurrently without contention problems. Thus, they can be used for performing broadcast and merge operations simultaneously on sets with non-uniform sizes. Extensions to generalized and faulty hypercubes and applications to image processing algorithms and for m-way search are discussed.
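
For the unpruned case, the binomial spanning tree of a hypercube can be built from purely local information: each node looks only at its address relative to the root. The Python sketch below shows that standard construction; the thesis's contribution, the pruning that restricts the tree to an arbitrary run of n consecutive nodes, is not reproduced here.

```python
# Binomial spanning tree on a binary hypercube, rooted at an arbitrary node.
# This is the standard (unpruned) construction; the thesis prunes such a tree
# so that it spans only a consecutive run of n nodes containing the root.

def children(node, root, dim):
    """Children of `node` in the binomial tree of a `dim`-cube rooted at `root`."""
    rel = node ^ root                       # relative address w.r.t. the root
    # A node may flip any bit strictly below the lowest set bit of its
    # relative address (the root, rel == 0, may flip every bit).
    low = dim if rel == 0 else (rel & -rel).bit_length() - 1
    return [node ^ (1 << k) for k in range(low)]

def levels(root, dim):
    """Breadth-first levels of the tree; depth is at most `dim`."""
    frontier, out = [root], []
    while frontier:
        out.append(frontier)
        frontier = [c for n in frontier for c in children(n, root, dim)]
    return out

for level in levels(root=5, dim=3):        # 3-cube rooted at node 5
    print(level)
# [5] / [4, 7, 1] / [6, 0, 3] / [2]  -- every node appears exactly once
```
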
Date: August 1992
Creator: John, Ajita

Logic Programming Tools for Dynamic Content Generation and Internet Data Mining

Description: The phenomenal growth of Information Technology requires us to elicit, store and maintain huge volumes of data. Analyzing this data for various purposes is becoming increasingly important. Data mining consists of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns over the data. We present two data mining techniques based on Logic programming tools. Data mining analyzes data by extracting patterns which describe its structure and discovers correlations in the form of rules. We distinguish analysis methods as visual and non-visual and present one application of each. We explain how our focus on the field of Logic Programming makes some of the very complex tasks related to Web-based data mining and dynamic content generation simple and easy to implement in a uniform framework.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2000
Creator: Gupta, Anima

Machine Language Techniques for Conversational Agents

Description: Machine Learning is the ability of a machine to perform better at a given task using its previous experience. Various algorithms such as decision trees, Bayesian learning, artificial neural networks, and instance-based learning are used widely in machine learning systems. Current applications of machine learning include credit card fraud detection, customer service based on purchase history, games, and many more. The application of machine learning techniques to natural language processing (NLP) has increased tremendously in recent years; examples are handwriting recognition and speech recognition. The problem we tackle in this Problem in Lieu of Thesis is applying machine-learning techniques to improve the performance of a conversational agent. The OpenMind repository of common sense, in the form of question-answer pairs, is treated as the training data for the machine learning system. The system interfaces with WordNet to capture important semantic and syntactic information about the words in the sentences. Further, the k-nearest neighbors algorithm, an instance-based learning algorithm, is used to simulate a case-based learning system. The resulting system is expected to be able to answer new queries with knowledge gained from the training data it was given.
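
As a rough sketch of the instance-based idea, the snippet below retrieves the stored answer whose question is most similar to a new query under a bag-of-words cosine measure. The training pairs, the similarity measure, and k are simplifications; the thesis draws its cases from the OpenMind question-answer repository and enriches them with WordNet information.

```python
# A minimal k-nearest-neighbors retriever over question-answer pairs.
# The training pairs below are invented placeholders; the thesis uses the
# OpenMind common-sense repository and WordNet-derived features instead.
from collections import Counter
import math

TRAINING = [
    ("what color is the sky", "The sky is blue."),
    ("what do cows drink", "Cows drink water."),
    ("where does the sun rise", "The sun rises in the east."),
]

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(query, k=1):
    """Return the answer(s) attached to the most similar stored question(s)."""
    scored = sorted(TRAINING, key=lambda qa: cosine(vec(query), vec(qa[0])), reverse=True)
    return [ans for _, ans in scored[:k]]

print(answer("what colour is the sky above"))   # ['The sky is blue.']
```
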
Date: December 2003
Creator: Sule, Manisha D.

A Machine Learning Method Suitable for Dynamic Domains

Description: The efficacy of a machine learning technique is domain dependent. Some machine learning techniques work very well for certain domains but are ill-suited for other domains. One area of real-world concern is the flexibility with which machine learning techniques can adapt to dynamic domains. Currently, there are no known reports of any system that can learn dynamic domains, short of starting over (i.e., re-running the program). Starting over is neither time nor cost efficient for real-world production environments. This dissertation studied a method, referred to as Experience Based Learning (EBL), that attempts to deal with conditions related to learning dynamic domains. EBL is an extension of Instance Based Learning methods. The hypothesis of this study was that the EBL method would automatically adjust to domain changes and still provide classification accuracy similar to methods that require starting over. To test this hypothesis, twelve widely studied machine learning datasets were used. A dynamic domain was simulated by presenting these datasets in an uninterrupted cycle of train, test, and retrain. The order of the twelve datasets and the order of records within each dataset were randomized to control for order biases in each of ten runs. As a result, these methods provided datasets that represent extreme levels of domain change. Using the above datasets, EBL's mean classification accuracies for each dataset were compared to the published static domain results of other machine learning systems. The results indicated that the EBL system's performance was not statistically different (p>0.30) from the other machine learning methods. These results indicate that the EBL system is able to adjust to an extreme level of domain change and yet produce satisfactory results. This finding supports the use of the EBL method in real-world environments that incur rapid changes to both variables and ...
Date: July 1996
Creator: Rowe, Michael C. (Michael Charles)

Maintaining Web Applications Integrity Running on RADIUM

Description: Computer security attacks take place due to the presence of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Despite the fact that software companies are investing millions of dollars in the research and development of software designs, security risks are still at large. In some cases software applications are found to carry vulnerabilities for many years before being identified; a recent example is the widely publicized Heartbleed bug in OpenSSL/TLS. In today's world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely that software applications will run without flaws. Attackers exploit these vulnerabilities and bugs and threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of software applications, because integrity is directly linked to the credibility of an application and the data it contains. Here I present a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants, and these invariants are used to maintain the integrity of the web application and to check its correct behavior at run time on the RADIUM architecture in case of any attack or malware. I use data invariants and program-flow invariants in my solution to maintain the integrity of the web application against such attacks or malware. I check the behavior of my proposed invariants at run time using the LibVMI/Volatility memory introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM.
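
Daikon infers likely invariants (for example, that a counter is never negative or that a total equals the sum of its parts) from observed executions, and those invariants can then be re-checked at run time to flag tampering. The sketch below shows only the re-checking half, with hand-written invariants over an invented application state; it is not the RADIUM/LibVMI integration itself.

```python
# Re-checking Daikon-style data invariants at run time.  The invariants and
# the application state below are invented placeholders for illustration;
# in the thesis the invariants come from Daikon and the observed state is
# read through LibVMI/Volatility memory introspection.

INVARIANTS = [
    ("session_count is non-negative",  lambda s: s["session_count"] >= 0),
    ("cart_total equals sum of items", lambda s: abs(s["cart_total"] - sum(s["items"])) < 1e-9),
    ("role is one of the known roles", lambda s: s["role"] in {"guest", "user", "admin"}),
]

def check_integrity(state):
    """Return the list of violated invariants (an empty list means 'looks intact')."""
    return [name for name, holds in INVARIANTS if not holds(state)]

clean    = {"session_count": 3, "items": [10.0, 2.5], "cart_total": 12.5, "role": "user"}
tampered = {"session_count": 3, "items": [10.0, 2.5], "cart_total": 0.0,  "role": "root"}

print(check_integrity(clean))     # []
print(check_integrity(tampered))  # ['cart_total equals sum of items', 'role is one of the known roles']
```
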
Date: August 2015
Creator: Ur-Rehman, Wasi

Measuring Semantic Relatedness Using Salient Encyclopedic Concepts

Description: While pragmatics, through its integration of situational awareness and real world relevant knowledge, offers a high level of analysis that is suitable for real interpretation of natural dialogue, semantics, on the other end, represents a lower yet more tractable and affordable linguistic level of analysis using current technologies. Generally, the understanding of semantic meaning in literature has revolved around the famous quote "You shall know a word by the company it keeps". In this thesis we investigate the role of context constituents in decoding the semantic meaning of the engulfing context; specifically we probe the role of salient concepts, defined as content-bearing expressions which afford encyclopedic definitions, as a suitable source of semantic clues to an unambiguous interpretation of context. Furthermore, we integrate this world knowledge in building a new and robust unsupervised semantic model and apply it to assess semantic relatedness between textual pairs, whether they are words, sentences or paragraphs. Moreover, we explore the abstraction of semantics across languages and utilize our findings in building a novel multi-lingual semantic relatedness model exploiting information acquired from various languages. We demonstrate the effectiveness and the superiority of our mono-lingual and multi-lingual models through a comprehensive set of evaluations on specialized synthetic datasets for semantic relatedness as well as real world applications such as paraphrase detection and short answer grading. Our work represents a novel approach to integrating world knowledge into current semantic models and a means to cross the language boundary for a better and more robust semantic relatedness representation, thus opening the door for an improved abstraction of meaning that carries the potential of ultimately imparting understanding of natural language to machines.
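
At its core, this family of models represents a word (or a longer text) as a weighted vector over encyclopedic concepts and scores relatedness as the cosine of two such vectors. The concept vectors below are invented toy values, not vectors produced by the thesis's model.

```python
# Relatedness as the cosine between concept vectors.  The concept weights
# below are invented toy numbers; in the thesis the vectors are built from
# salient encyclopedic (Wikipedia-style) concepts observed in large corpora.
import math

CONCEPT_VECTORS = {
    "car":    {"Automobile": 0.9, "Engine": 0.6, "Road": 0.4},
    "truck":  {"Automobile": 0.8, "Engine": 0.5, "Cargo": 0.7},
    "banana": {"Fruit": 0.9, "Plant": 0.5},
}

def relatedness(w1, w2):
    a, b = CONCEPT_VECTORS[w1], CONCEPT_VECTORS[w2]
    dot = sum(a[c] * b.get(c, 0.0) for c in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

print(round(relatedness("car", "truck"), 3))    # high: the words share concepts
print(round(relatedness("car", "banana"), 3))   # 0.0: no shared concepts
```
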
Date: August 2011
Creator: Hassan, Samer

Measuring Vital Signs Using Smart Phones

Description: Smart phones today have become increasingly popular with the general public for their diverse abilities, such as navigation, social networking, and multimedia facilities, to name a few. These phones are equipped with high-end processors, high-resolution cameras, and built-in sensors such as an accelerometer, orientation sensor, and light sensor. According to a comScore survey, 25.3% of US adults use smart phones in their daily lives. Motivated by the capability of smart phones and their extensive usage, I focused on utilizing them for bio-medical applications. In this thesis, I present a new application for a smart phone to quantify vital signs such as heart rate, respiratory rate and blood pressure with the help of its built-in sensors. Using the camera and a microphone, I have shown how the blood pressure and heart rate can be determined for a subject. People sometimes encounter minor situations like fainting or fatal accidents like a car crash at unexpected times and places. It would be useful to have a device which can measure all vital signs in such an event. The second part of this thesis demonstrates a new mode of communication for next generation 9-1-1 calls. In this new architecture, the call-taker will be able to control the multimedia elements in the phone from a remote location. This would help the call-taker or first responder to have better control over the situation. Transmission of the vital signs measured using the smart phone can be a life saver in critical situations. In today's voice oriented 9-1-1 calls, the dispatcher first collects critical information (e.g., location, call-back number) from the caller, and assesses the situation. Meanwhile, the dispatchers constantly face a "60-second dilemma"; i.e., within 60 seconds, they need to make a complicated but important decision: whether to dispatch and, if so, what to dispatch. The dispatchers often feel that ...
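
One common way to obtain a heart rate from a phone camera is photoplethysmography: with a fingertip pressed over the lens, the average brightness of each frame rises and falls with blood volume, and the dominant oscillation frequency gives beats per minute. The sketch below assumes per-frame brightness values have already been extracted (here they are synthesized) and is not the thesis's exact pipeline.

```python
# Estimating heart rate from the mean brightness of camera frames
# (photoplethysmography).  The signal below is synthesized; a real app would
# average the red channel of each preview frame with a fingertip on the lens.
import numpy as np

FPS = 30.0                                    # camera frame rate
t = np.arange(0, 10, 1 / FPS)                 # 10 seconds of "frames"
true_bpm = 72
signal = 0.05 * np.sin(2 * np.pi * (true_bpm / 60.0) * t) + 0.01 * np.random.randn(t.size)

def heart_rate_bpm(signal, fps):
    """Dominant frequency in the plausible heart-rate band, in beats per minute."""
    x = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)    # 42-210 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

print(round(heart_rate_bpm(signal, FPS)))     # approximately 72
```
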
Date: December 2010
Creator: Chandrasekaran, Vikram

A Mechanism for Facilitating Temporal Reasoning in Discrete Event Simulation

Description: This research establishes the feasibility and potential utility of a software mechanism which employs artificial intelligence techniques to enhance the capabilities of standard discrete event simulators. As background, current methods of integrating artificial intelligence with simulation and relevant research are briefly reviewed.
Date: May 1992
Creator: Legge, Gaynor W.

Mediation on XQuery Views

Description: The major goal of information integration is to provide efficient and easy-to-use access to multiple heterogeneous data sources with a single query. At the same time, one of the current trends is to use standard technologies for implementing solutions to complex software problems. In this dissertation, I used XML and XQuery as the standard technologies and have developed an extended projection algorithm to provide a solution to the information integration problem. In order to demonstrate my solution, I implemented a prototype mediation system called Omphalos based on XML related technologies. The dissertation describes the architecture of the system, its metadata, and the process it uses to answer queries. The system uses XQuery expressions (termed metaqueries) to capture complex mappings between global schemas and data source schemas. The system then applies these metaqueries in order to rewrite a user query on a virtual global database (representing the integrated view of the heterogeneous data sources) to a query (termed an outsourced query) on the real data sources. An extended XML document projection algorithm was developed to increase the efficiency of selecting the relevant subset of data from an individual data source to answer the user query. The system applies the projection algorithm to decompose an outsourced query into atomic queries which are each executed on a single data source. I also developed an algorithm to generate integrating queries, which the system uses to compose the answers from the atomic queries into a single answer to the original user query. I present a proof of both the extended XML document projection algorithm and the query integration algorithm. An analysis of the efficiency of the new extended algorithm is also presented. Finally I describe a collaborative schema-matching tool that was implemented to facilitate maintaining metadata.
Date: December 2006
Creator: Peng, Xiaobo

Memory Management and Garbage Collection Algorithms for Java-Based Prolog

Description: Implementing a Prolog Runtime System in a language like Java which provides its own automatic memory management and safety features such as built-in index checking and array initialization requires a consistent approach to memory management based on a simple ultimate goal: minimizing total memory management time and extra space involved. The total memory management time for Jinni is made up of garbage collection time both for Java and Jinni itself. Extra space is usually requested at Jinni's garbage collection. This goal motivates us to find a simple and practical garbage collection algorithm and implementation for our Prolog engine. In this thesis we survey various algorithms already proposed and offer our own contribution to the study of garbage collection by improvements and optimizations for some classic algorithms. We implemented these algorithms based on the dynamic array algorithm for an all-dynamic Prolog engine (JINNI 2000). The comparisons of our implementations versus the originally proposed algorithm allow us to draw informative conclusions on their theoretical complexity model and their empirical effectiveness.
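
As a toy illustration of collecting an array-based heap (and only that; Jinni's collector must additionally respect Prolog-specific structures such as the trail and choice points), the sketch below copies the cells reachable from a set of roots into a fresh array and rewrites their references.

```python
# A toy copying (semi-space) collector over an array-based heap.  Cells are
# just lists of references (heap indices); live cells reachable from the roots
# are copied to a fresh array and references are rewritten.  This illustrates
# the general array-heap idea only -- it is not Jinni's actual collector.

def collect(heap, roots):
    new_heap, forward = [], {}                   # forward: old index -> new index

    def copy(i):
        if i not in forward:
            forward[i] = len(new_heap)
            new_heap.append(list(heap[i]))       # shallow copy of the cell
            for ref in heap[i]:
                copy(ref)
        return forward[i]

    new_roots = [copy(r) for r in roots]
    for cell in new_heap:                        # rewrite references to new indices
        cell[:] = [forward[ref] for ref in cell]
    return new_heap, new_roots

# cell 0 -> cell 2, cell 1 is garbage, cell 2 -> cell 0 (a cycle)
heap = [[2], [0, 2], [0]]
new_heap, new_roots = collect(heap, roots=[0])
print(new_heap, new_roots)   # [[1], [0]] [0]  -- cell 1 was reclaimed
```
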
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2001
Creator: Zhou, Qinan

A Minimally Supervised Word Sense Disambiguation Algorithm Using Syntactic Dependencies and Semantic Generalizations

Description: Natural language is inherently ambiguous. For example, the word "bank" can mean a financial institution or a river shore. Finding the correct meaning of a word in a particular context is a task known as word sense disambiguation (WSD), which is essential for many natural language processing applications such as machine translation, information retrieval, and others. While most current WSD methods try to disambiguate a small number of words for which enough annotated examples are available, the method proposed in this thesis attempts to address all words in unrestricted text. The method is based on constraints imposed by syntactic dependencies and concept generalizations drawn from an external dictionary. The method was tested on standard benchmarks as used during the SENSEVAL-2 and SENSEVAL-3 WSD international evaluation exercises, and was found to be competitive.
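
A much-simplified relative of dictionary-based disambiguation is gloss overlap (the Lesk approach): choose the sense whose dictionary definition shares the most words with the context. The glosses below are paraphrased toy entries, and this is only a distant cousin of the dependency- and generalization-based method developed in the thesis.

```python
# Gloss-overlap (Lesk-style) disambiguation of "bank".  The two glosses are
# paraphrased toy entries; the thesis instead exploits syntactic dependencies
# and concept generalizations drawn from an external dictionary.

SENSES = {
    "bank#finance": "an institution that accepts deposits and lends money",
    "bank#river":   "sloping land beside a body of water such as a river",
}

def disambiguate(context):
    words = set(context.lower().split())
    def overlap(gloss):
        return len(words & set(gloss.split()))
    return max(SENSES, key=lambda s: overlap(SENSES[s]))

print(disambiguate("she sat on the bank of the river watching the water"))
# bank#river
print(disambiguate("the bank approved the loan and lends money at low interest"))
# bank#finance
```
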
Date: December 2005
Creator: Faruque, Md. Ehsanul

Mobile agent security through multi-agent cryptographic protocols.

Description: An increasingly promising and widespread topic of research in distributed computing is the mobile agent paradigm: code travelling and performing computations on remote hosts in an autonomous manner. One of the biggest challenges faced by this new paradigm is security. The issue of protecting sensitive code and data carried by a mobile agent against tampering from a malicious host is particularly hard but important. Based on secure multi-party computation, a recent research direction shows the feasibility of a software-only solution to this problem, which had previously been deemed impossible by some researchers. The best result prior to this dissertation is a single-agent protocol which requires the participation of a trusted third party. Our research employs multi-agent protocols to eliminate the trusted third party, resulting in a protocol with minimum trust assumptions. This dissertation presents one of the first formal definitions of secure mobile agent computation, in which the privacy and integrity of the agent code and data as well as the data provided by the host are all protected. We present secure protocols for mobile agent computation against static, semi-honest or malicious adversaries without relying on any third party or trusting any specific participant in the system. The security of our protocols is formally proven through standard proof techniques and according to our formal definition of security. Our second result is a more practical agent protocol with strong security against most real-world host attacks. The security features are carefully analyzed, and the practicality is demonstrated through implementation and experimental study on a real-world mobile agent platform. All these protocols rely heavily on well-established cryptographic primitives, such as encrypted circuits, threshold decryption, and oblivious transfer. Our study of these tools yields new contributions to the general field of cryptography. Particularly, we correct a well-known construction of the encrypted circuit and give ...
Date: May 2004
Creator: Xu, Ke

Mobile-Based Smart Auscultation

Description: In developing countries, acute respiratory infections (ARIs) are responsible for two million deaths per year. Most victims are children who are less than 5 years old. Pneumonia kills 5000 children per day. The statistics for cardiovascular diseases (CVDs) are even more alarming. According to a 2009 report from the World Health Organization (WHO), CVDs kill 17 million people per year. In many resource-poor parts of the world such as India and China, many people are unable to access cardiologists, pulmonologists, and other specialists. Hence, low-skilled health professionals are responsible for screening people for ARIs and CVDs in these areas. For example, in the rural areas of the Philippines, there is only one doctor for every 10,000 people. By contrast, the United States has one doctor for every 500 Americans. Due to advances in technology, it is now possible to use a smartphone for audio recording, signal processing, and machine learning. In my thesis, I have developed an Android application named Smart Auscultation. Auscultation is a process in which physicians listen to heart and lung sounds to diagnose disorders. Cardiologists spend years mastering this skill. The Smart Auscultation application is capable of recording and classifying heart sounds, and can be used by public or clinical health workers. This application can detect abnormal heart sounds with 92-98% accuracy. In addition, the application can record, but not yet classify, lung sounds. This application will be able to help save thousands of lives by allowing anyone to identify abnormal heart and lung sounds.
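
A skeletal version of such a pipeline is sketched below: extract a simple spectral feature from a recording and label it with a nearest-centroid rule. The "recordings" are synthesized (a murmur is faked as extra high-frequency energy), and the thesis's actual features and classifier are different and far more capable.

```python
# A skeletal heart-sound classification pipeline: compute a crude spectral
# feature for each recording, then classify with a nearest-centroid rule.
# All audio here is synthesized for illustration only.
import numpy as np

SR = 2000                                         # samples per second

def make_sound(murmur, seconds=3):
    t = np.arange(0, seconds, 1 / SR)
    beats = (np.sin(2 * np.pi * 1.2 * t) > 0.95).astype(float)   # S1/S2-like pulses
    hum = 0.3 * np.sin(2 * np.pi * 300 * t) if murmur else 0.0   # fake murmur-band energy
    return beats + hum + 0.02 * np.random.randn(t.size)

def feature(x):
    """Fraction of spectral energy above 150 Hz (a crude murmur indicator)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / SR)
    return spec[freqs > 150].sum() / spec.sum()

train = [(feature(make_sound(False)), "normal") for _ in range(20)] + \
        [(feature(make_sound(True)), "abnormal") for _ in range(20)]
centroid = {label: np.mean([f for f, l in train if l == label]) for label in ("normal", "abnormal")}

def classify(x):
    f = feature(x)
    return min(centroid, key=lambda label: abs(f - centroid[label]))

print(classify(make_sound(False)))   # normal
print(classify(make_sound(True)))    # abnormal
```
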
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2017
Creator: Chitnis, Anurag Ashok

Modeling Alcohol Consumption Using Blog Data

Description: How do the content and writing style of people who drink alcoholic beverages stand out from those of non-drinkers? How much information can we learn about a person's alcohol consumption behavior by reading text that they have authored? This thesis attempts to extend the methods deployed in authorship attribution and authorship profiling research into the domain of automatically identifying the human action of drinking alcoholic beverages. I examine how a psycholinguistics dictionary (the Linguistic Inquiry and Word Count (LIWC) lexicon, developed by James Pennebaker), together with Kenneth Burke's concept of words as symbols of human action and James Wertsch's concept of mediated action, provides a framework for analyzing meaningful data patterns from the content of blogs written by consumers of alcoholic beverages. The contributions of this thesis to the research field are twofold. First, I show that it is possible to automatically identify blog posts that have content related to the consumption of alcoholic beverages. And second, I provide a framework and tools to model human behavior through text analysis of blog data.
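
The LIWC-style feature extraction at the heart of this approach amounts to counting, per post, the fraction of words that fall into each lexicon category. The tiny category lists below are invented placeholders for the real LIWC categories.

```python
# Counting word categories in a post, LIWC-style, as simple features.  The
# small category lists below are placeholders; the thesis uses the full
# Linguistic Inquiry and Word Count (LIWC) lexicon developed by Pennebaker.

CATEGORIES = {
    "alcohol_terms": {"beer", "wine", "whiskey", "drunk", "bar", "shots"},
    "social":        {"friend", "friends", "party", "we", "together"},
    "negemo":        {"hate", "sad", "angry", "hungover", "regret"},
}

def category_profile(text):
    words = text.lower().split()
    total = len(words) or 1
    return {cat: sum(w.strip(".,!?") in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

post = "Great night out with friends, too many shots of whiskey, feeling hungover and full of regret."
print(category_profile(post))
# {'alcohol_terms': 0.125, 'social': 0.0625, 'negemo': 0.125}
```

These per-category rates can then be fed, alongside other stylistic features, to an ordinary text classifier.
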
Date: May 2013
Creator: Koh, Kok Chuan

Modeling and Analysis of Next Generation 9-1-1 Emergency Medical Dispatch Protocols

Description: Emergency Medical Dispatch Protocols are guidelines that a 9-1-1 dispatcher uses to evaluate the nature of the emergency, the resources to send, and the nature of help provided to the 9-1-1 caller. The current Dispatch Protocols are based on voice-only calls, but the Next Generation 9-1-1 (NG9-1-1) architecture will allow multimedia emergency calls. In this thesis I analyze and model the Emergency Medical Dispatch Protocols for the NG9-1-1 architecture. I have identified various technical aspects to improve the NG9-1-1 Dispatch Protocols. The devices (smartphones) at the caller end have advanced to a point where they can be used to send and receive video, pictures and text. There are sensors embedded in them that can be used for initial diagnosis of the injured person. There is a need to improve the human-computer (smartphone) interface to take advantage of technology so that callers can easily make use of various features available to them. The dispatchers at the 9-1-1 call center can make use of these new protocols to improve the quality and the response time. They will have the capability of multiple media streams to interact with the caller and the first responders. The specific contributions in this thesis include developing applications that use smartphone sensors. The CPR application uses the smartphone to help administer effective CPR even if the person is not trained. The application makes the CPR process closed loop, i.e., the person who administers the CPR as well as the 9-1-1 operator receive feedback and prompts from the application about the correctness of the CPR. The breathing application analyzes the quality of breathing of the affected person and automatically sends the information to the 9-1-1 operator. In order to improve the human-computer interface at the caller and the operator end, I have analyzed Fitts' law and extended it so that it ...
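
For reference, the standard (Shannon) formulation of Fitts' law predicts movement time from the distance to a target and its width; the thesis's extension of the law is not reproduced here, and the coefficients below are illustrative rather than fitted values.

```python
# Fitts' law, Shannon formulation: MT = a + b * log2(D/W + 1), where D is the
# distance to the target and W its width.  The coefficients a and b below are
# illustrative placeholders, not values fitted in the thesis.
import math

def fitts_time_ms(distance, width, a=100.0, b=150.0):
    """Predicted movement time in milliseconds."""
    return a + b * math.log2(distance / width + 1)

print(round(fitts_time_ms(distance=400, width=40)))   # ~619 ms for a wide target
print(round(fitts_time_ms(distance=400, width=10)))   # ~904 ms: smaller target, same distance
```
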
Date: August 2013
Creator: Gupta, Neeraj Kant

Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations

Description: The history of mitigation programs to contain vector-borne diseases is a story of successes and failures. Due to the complex interplay among multiple factors that determine disease dynamics, the general principles for timely and specific intervention for incidence reduction or eradication of life-threatening diseases have yet to be determined. This research discusses computational methods developed to assist in the understanding of complex relationships affecting vector-borne disease dynamics. A computational framework to assist public health practitioners with exploring the dynamics of vector-borne diseases, such as malaria and dengue, in homogeneous and heterogeneous populations has been conceived, designed, and implemented. The framework integrates a stochastic computational model of interactions to simulate horizontal disease transmission. The intent of the computational modeling has been the integration of stochasticity during simulation of the disease progression while reducing the number of interactions necessary to simulate a disease outbreak. While reducing the number of interactions needed to simulate disease dynamics improves computational time, the realization of interactions can remain computationally expensive. Using multi-threading technology to improve performance upon the original computational model, multi-threading experimental results have been tested and reported. In addition to the contact model, the modeling of biological processes specific to the corresponding pathogen-carrier vector has been integrated to increase the specificity of the vector-borne disease model. Last, automation for requesting, retrieving, parsing, and storing specific weather data and geospatial information from federal agencies has been implemented to study the differences between homogeneous and heterogeneous populations.
Date: August 2016
Creator: Bravo-Salgado, Angel D

Modeling Complex Forest Ecology in a Parallel Computing Infrastructure

Description: Effective stewardship of forest ecosystems makes it imperative to measure, monitor, and predict the dynamic changes of forest ecology. Measuring and monitoring provide us with a picture of a forest's current state and the necessary data to formulate models for prediction. However, societal and natural events alter the course of a forest's development. A simulation environment that takes these events into account will facilitate forest management. In this thesis, we describe an efficient parallel implementation of a land cover use model, Mosaic, and discuss the development efforts to incorporate spatial interaction and succession dynamics into the model. To evaluate the performance of our implementation, an extensive set of simulation experiments was carried out using a dataset representing the H.J. Andrews Forest in the Oregon Cascades. Results indicate that a significant reduction in the simulation execution time of our parallel model can be achieved as compared to uni-processor simulations.
Date: August 2003
Creator: Mayes, John

Modeling Infectious Disease Spread Using Global Stochastic Field Simulation

Description: The susceptible-infective-removed (SIR) model and its derivatives are the classic mathematical models for the study of infectious diseases in epidemiology. In order to model and simulate epidemics of an infectious disease, a global stochastic field simulation paradigm (GSFS) is proposed, which incorporates geographic and demographic based interactions. The interaction measure between regions is a function of population density and geographical distance, and has been extended to include demographic and migratory constraints. The progression of diseases using GSFS is analyzed, and GSFS exhibits behavior similar to the SIR model when using the geographic information systems (GIS) gravity model for interactions. The limitations of the SIR and similar models of a homogeneous population with uniform mixing are addressed by the GSFS model. The GSFS model is oriented to heterogeneous populations, and can incorporate interactions based on geography, demography, environment and migration patterns. The progression of diseases can be modeled at higher levels of fidelity using the GSFS model, which facilitates optimal deployment of public health resources for prevention, control and surveillance of infectious diseases.
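
The gravity-model interaction at the core of this approach makes the coupling between two regions grow with their populations and decay with distance. The populations, coordinates, and decay exponent below are invented for illustration; the thesis further constrains these weights with demographic and migratory information.

```python
# A GIS-style gravity model for region-to-region interaction strength:
# interaction is proportional to the product of the populations and decays
# with geographic distance.  All values here are invented placeholders.
import math

regions = {
    "A": {"pop": 500_000, "xy": (0.0, 0.0)},
    "B": {"pop": 200_000, "xy": (30.0, 0.0)},
    "C": {"pop": 50_000,  "xy": (100.0, 40.0)},
}

def interaction(r1, r2, decay=2.0):
    p1, p2 = regions[r1]["pop"], regions[r2]["pop"]
    (x1, y1), (x2, y2) = regions[r1]["xy"], regions[r2]["xy"]
    distance = math.hypot(x2 - x1, y2 - y1)
    return (p1 * p2) / distance ** decay

print(f"{interaction('A', 'B'):.3g}")   # strong coupling: large, nearby populations
print(f"{interaction('A', 'C'):.3g}")   # weak coupling: small, distant population
```
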
Date: August 2006
Creator: Venkatachalam, Sangeeta

Modeling the Impact and Intervention of a Sexually Transmitted Disease: Human Papilloma Virus

Description: Many human papilloma virus (HPV) types are sexually transmitted, and HPV DNA types 16, 18, 31, and 45 account for more than 75% of all cervical dysplasia. Candidate vaccines are successfully completing US Food and Drug Administration (FDA) phase III testing, and several drug companies are in licensing arbitration. Once such a vaccine becomes available, it is unlikely that 100% vaccination coverage will be achieved; hence the need for vaccination strategies that will have the greatest reduction in the endemic prevalence of HPV. This thesis introduces two discrete-time models for evaluating the effect of demographic-biased vaccination strategies: one model incorporates temporal demographics (i.e., age) in population compartments; the other, non-temporal demographics (i.e., race, ethnicity). Also presented is an intuitive Web-based interface that was developed to allow the user to evaluate the effects on prevalence of a demographic-biased intervention by tailoring the model parameters to specific demographics and geographical regions.
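
A minimal discrete-time compartment update of the kind described, with a vaccinated fraction removed from the susceptible pool, is sketched below. All rates and coverage levels are illustrative only, and the thesis's models additionally stratify the population by age or by race and ethnicity.

```python
# A discrete-time susceptible/infective update with a vaccinated fraction
# removed from the susceptible pool.  Rates and coverages are illustrative
# placeholders, not parameters from the thesis.

def attack_rate(beta=0.4, recovery=0.1, coverage=0.0, steps=500):
    """Fraction of the population ever infected over the run."""
    s, i, total = 1.0 - coverage - 0.01, 0.01, 0.01   # vaccinated individuals start protected
    for _ in range(steps):
        new_infections = beta * s * i        # transmission this time step
        s -= new_infections
        i += new_infections - recovery * i   # recoveries leave the infective pool
        total += new_infections
    return total

for coverage in (0.0, 0.3, 0.6):
    print(f"coverage {coverage:.0%}: attack rate {attack_rate(coverage=coverage):.2f}")
# higher vaccination coverage -> a smaller fraction of the population is ever infected
```
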
Date: May 2006
Creator: Corley, Courtney D.

Multi-Agent Architecture for Internet Information Extraction and Visualization

Description: The World Wide Web is one of the largest sources of information; more and more applications are being developed daily to make use of this information. This thesis presents a multi-agent architecture that deals with some of the issues related to Internet data extraction. The primary issue addresses the reliable, efficient and quick extraction of data through the use of HTTP performance monitoring agents. A second issue focuses on how to make use of available data to make decisions and alert the user when there is a change in the data; this is done with the help of user agents that are equipped with a defeasible reasoning interpreter. An additional issue is the visualization of extracted data; this is done with the aid of VRML visualization agents. The cited issues are discussed using stock portfolio management as an example application.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2000
Creator: Gollapally, Devender R.

Multi-perspective, Multi-modal Image Registration and Fusion

Description: Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is defined as the strategic combination of information collected by various sensors from different locations or of different types in order to obtain a better understanding of an observed scene or situation. Fusion of multi-modal images cannot be completed unless the two modalities are spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration, and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task due to the different semantic interpretation of features extracted from each modality. This problem is decoupled into three sub-problems. The first step is the identification and extraction of common features. The second step is the determination of corresponding points. The third step consists of determining the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners. Using these features requires an extensive optimization search in order to determine the corresponding points. Many methods use global positioning systems (GPS) and a calibrated camera in order to obtain an initial estimate of the camera parameters. The advantages of our work over previous works are the following. First, I use high-level features, which significantly reduce the search space for the optimization process. Second, the determination of corresponding points is modeled as an assignment problem between a small number of objects. On the other hand, fusing LiDAR and visual images is beneficial due to the different and rich characteristics of both modalities. LiDAR data contain 3D information, while images contain visual information. Developing a fusion technique that uses the characteristics of both modalities is very important. I establish a decision-level fusion technique using manifold models.
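
Modeling correspondence as an assignment problem means finding the minimum-cost one-to-one matching between the two small feature sets, which the Hungarian method solves exactly. In the sketch below the cost is just the distance between invented 2-D feature centroids; in the thesis the features come from LiDAR and visual imagery and the cost reflects their agreement.

```python
# Matching a small number of high-level features between two modalities by
# solving an assignment problem (Hungarian method).  The feature centroids
# below are invented; real costs would measure LiDAR/image feature agreement.
import numpy as np
from scipy.optimize import linear_sum_assignment

lidar_feats = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])   # e.g. building centroids
image_feats = np.array([[26.0, 29.0], [11.0, 13.0], [41.0, 7.0]])

cost = np.linalg.norm(lidar_feats[:, None, :] - image_feats[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)          # minimum-cost one-to-one matching

for r, c in zip(rows, cols):
    print(f"LiDAR feature {r} <-> image feature {c} (cost {cost[r, c]:.2f})")
# LiDAR 0 <-> image 1, LiDAR 1 <-> image 2, LiDAR 2 <-> image 0
```
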
Date: August 2012
Creator: Belkhouche, Mohammed Yassine

A Multi-Time Scale Learning Mechanism for Neuromimic Processing

Description: Learning, representing, and reasoning about temporal relations, particularly causal relations, is a deep problem in artificial intelligence (AI). Learning such representations in the real world is complicated by the fact that phenomena are subject to multiple time scale influences and may operate with a strange attractor dynamic. This dissertation proposes a new computational learning mechanism, the adaptrode, which, used in a neuromimic processing architecture, may help to solve some of these problems. The adaptrode is shown to emulate the dynamics of real biological synapses and represents a significant departure from the classical weighted-input scheme of conventional artificial neural networks. Indeed the adaptrode is shown, by analysis of the deep structure of real synapses, to have a strong structural correspondence with them in terms of multi-time scale biophysical processes. Simulations of an adaptrode-based neuron and a small network of neurons are shown to have the same learning capabilities as invertebrate animals in classical conditioning. Classical conditioning is considered a fundamental learning task in animals. Furthermore, it is subject to temporal ordering constraints that fulfill the criteria of causal relations in natural systems. It may offer clues to the learning of causal relations and mechanisms for causal reasoning. The adaptrode is shown to solve an advanced problem in classical conditioning that addresses the problem of real world dynamics. A network is able to learn multiple, contrary associations that are separated in time domains; that is, a long-term memory can co-exist with a short-term contrary memory without the latter destroying the former. This solves the problem of how to deal with meaningful transients while maintaining long-term memories. Possible applications of adaptrode-based neural networks are explored and suggestions for future research are made.
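
A generic flavor of the multi-time-scale idea can be sketched with two coupled traces: a fast trace that follows the input and decays quickly, and a slow trace that accumulates from the fast trace and decays very slowly, so a long-term memory survives short contrary transients. This is an illustrative sketch, not the adaptrode's actual update equations.

```python
# A generic two-time-scale synaptic trace.  The fast trace tracks recent
# input; the slow trace charges only from the fast trace and decays very
# slowly.  All rates are illustrative, not the adaptrode's parameters.

def run(inputs, fast_rate=0.5, fast_decay=0.3, slow_rate=0.05, slow_decay=0.005):
    fast, slow, history = 0.0, 0.0, []
    for x in inputs:
        fast += fast_rate * x * (1.0 - fast) - fast_decay * fast
        slow += slow_rate * fast * (1.0 - slow) - slow_decay * slow
        history.append((round(fast, 3), round(slow, 3)))
    return history

# 30 steps of stimulation, then 30 quiet steps: the fast trace collapses,
# while the slow trace (the longer-term memory) largely persists.
trace = run([1.0] * 30 + [0.0] * 30)
print("after stimulation:", trace[29])
print("after quiet period:", trace[59])
```
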
Date: August 1994
Creator: Mobus, George E. (George Edward)

A Multi-Variate Analysis of SMTP Paths and Relays to Restrict Spam and Phishing Attacks in Emails

Description: The classifier discussed in this thesis considers the path traversed by an email (instead of its content) and the reputation of the relays, features that are inaccessible to spammers. Groups of spammers and the individual behaviors of a spammer in a given domain were analyzed to yield association patterns, which were then used to identify similar spammers. Unsolicited and phishing emails were successfully isolated from legitimate emails using the analysis results. Spammers and phishers are also categorized into serial spammers/phishers, recent spammers/phishers, prospective spammers/phishers, and suspects. Legitimate emails and trusted domains are classified into socially close (family members, friends), socially distinct (strangers, etc.), and opt-outs (resolved false positives and false negatives). Overall, this classifier resulted in far fewer false positives than current filters such as SpamAssassin, achieving a 98.65% precision, which is well comparable to the precisions achieved by SPF and DNSRBL blacklists.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2006
Creator: Palla, Srikanth