
Intelligent Engineering Systems through Artificial Neural Networks, Volume 16


Proceedings of the ANNIE 2006 conference, November 2006, St. Louis, Missouri. The newest volume in this series presents refereed papers in the following categories and their applications in the engineering domain: Neural Networks; Complex Networks; Evolutionary Programming; Data Mining; Fuzzy Logic; Adaptive Control; Pattern Recognition; Smart Engineering System Design. These papers are intended to provide a forum for researchers in the field to exchange ideas on smart engineering system design.

  • Copyright:
    All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher. © 2006 ASME
  • ISBN:
    0791802566
  • No. of Pages:
    1000
  • Order No.:
    802566
Front Matter
  • Part I: Evolutionary Computation


      This research proposes a design method that combines a probability based decision strategy and design grammars (production rules) into a “design agent” for system development. This decision strategy can be evolved by applying a genetic algorithm (GA) to facilitate the exploration of a multi-domain design space in a topologically open-ended manner, and still efficiently find appropriate design configurations. Probability based decision making is applied at each stage of topology development. A design of a low pass filter is used as a case study. Experimental results show that a design agent demonstrates steady performance in terms of the overall fitness of its designs. A superior design agent can easily produce designs of good quality and reach the best design.

      Understanding how variation operators work leads to a better understanding both of the search space and of the problem being solved. This study examines the behavior of mutation and crossover operators in genetic programming using parse trees to find solutions to 3-parity and 4-parity. The standard subtree crossover and subtree mutation operators are studied along with two new operators, fold mutation and fusion crossover. They are studied in terms of how often and how fast they solve the problem; how much they change the fitness on average; and what proportion of variations are neutral, harmful, and helpful. It is found that operators behave differently when used alone than when used together with another operator and that some operators behave differently when solving 3-parity and when solving 4-parity.

      Optimization problems may require non-polynomial time to solve, so heuristic approaches are widely adopted. Swarm intelligence (SI) is one such approach proven to be efficient. An SI system has simple agents working together to quickly accomplish a complex task. However, this efficiency does not guarantee a global optimum, because agents can be trapped at local optima without progressing. For this reason, we compute a metric and compare it with an assigned threshold to determine when to escape from traps in order to improve performance. The metric quickly approaches the threshold when agents struggle to find a better solution. Once it is reached, the ongoing run of the system terminates and a brand-new run begins to counter the trap. We study the traveling salesman problem (TSP) and show that under the same conditions the threshold approach often finds a better solution more promptly.
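
A minimal sketch of the threshold-based restart mechanism described above, with a simple random-swap local search standing in for the swarm; the stagnation metric, window, and threshold names are illustrative assumptions, not the paper's implementation:

```python
import random

def tour_length(tour, dist):
    # Total length of a closed TSP tour.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def restart_search(dist, threshold=0.95, window=200, max_iters=20000):
    n = len(dist)
    best_overall = None
    iters = 0
    while iters < max_iters:
        tour = random.sample(range(n), n)            # brand-new run
        best = tour_length(tour, dist)
        stale = 0                                    # iterations without improvement
        while iters < max_iters:
            iters += 1
            i, j = random.sample(range(n), 2)
            tour[i], tour[j] = tour[j], tour[i]
            cost = tour_length(tour, dist)
            if cost < best:
                best, stale = cost, 0
            else:
                tour[i], tour[j] = tour[j], tour[i]  # undo the swap
                stale += 1
            if stale / window >= threshold:          # metric reached the threshold
                break                                # terminate this run and restart
        if best_overall is None or best < best_overall:
            best_overall = best
    return best_overall
```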

      In this paper, we present a path-planning algorithm for mobile robots in an environment with obstacles. We investigate the use of Ant Colony Optimization (ACO) for determining the optimal path for a wheeled mobile robot to visit multiple targets. The environment in which the robot operates is modeled in the form of discrete cells and the robot is modeled as a point robot. The robot has knowledge of the targets' positions but only limited local sensing capability for detecting obstacles. The paper investigates the use of multiple autonomous robots for solving the shortest-path problem. The ACO algorithm is used for dynamic planning of a path that avoids obstacles and visits all the targets.
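
A hedged sketch of the two core ACO rules such a planner relies on: probabilistic next-node selection weighted by pheromone and inverse distance, and evaporation-plus-deposit pheromone updates. The grid and obstacle modelling are omitted, distances are assumed positive, and all names are illustrative:

```python
import random

def choose_next(current, unvisited, pher, dist, alpha=1.0, beta=2.0):
    # Roulette-wheel selection: weight = pheromone^alpha * (1/distance)^beta.
    weights = [(pher[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
               for j in unvisited]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def update_pheromone(pher, tours, dist, rho=0.1, q=1.0):
    # Evaporate everywhere, then deposit inversely proportional to tour length.
    for row in pher:
        for j in range(len(row)):
            row[j] *= 1.0 - rho
    for tour in tours:
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        for a, b in zip(tour, tour[1:]):
            pher[a][b] += q / length
```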

      Predicting the motion of a target will forever be an elusive goal for targeting system operators. The problem is made even more difficult when there is an intelligence actively attempting to elude detection. The intelligence tracked in this article is that of a Mobile Missile Launcher (MML) actively attempting to avoid detection and subsequent destruction by cruise missiles. The approach taken is to track the MML using swarm intelligence combined with personality traits of human operators to predict the most likely locations where the MML may be. The tracking is augmented by allowing the user to interact with the swarm agents to help guide the tracking algorithm, using a mathematical concept known as belief algebra. Preliminary results show that the swarm predicts the most likely location of a simulated target. This work was initially started under a Small Business Innovative Research award to 21st Century Systems, Inc. (21CSI).

      This paper presents an exploration of Free Search (FS) and modified Differential Evolution (DE) with enhanced adaptivity. The aim of the study is to identify how these methods cope, unaided, with changes in the number of variables of a hard design test. The results suggest that both methods can adapt successfully to changes in the number of variables and constraint conditions. The contributions to engineering design are the replacement, to a large extent, of human-based search with machine-based search, and the movement of the optimisation process from human-guided to machine self-guided search.

      We present a method for improving the performance of evolutionary algorithms (EAs). It consists of an intelligent method, based on human expertise, for establishing a fuzzy system that makes exploration and exploitation of the landscape more efficient by increasing or decreasing the number of individuals across generations. This is done depending on the values of two measures: the variance, and the number of generations in which evolution has not produced a significant increase in the fitness of the best individual.

      This paper seeks to determine the feasibility of using fuzzy concepts in conjunction with genetic algorithms to solve multi-objective decision-making problems. The domain chosen is single machine scheduling. Another focus of this research is to test the feasibility of gene-detection strategies for performing block crossover during the evolution cycle of the genetic algorithm. During block crossover, the good gene segments from the parent chromosomes are preserved in the child chromosomes. Several problems with known optimal solutions for non-zero ready times are tested using the proposed method and the results are reported. The results indicate that the proposed method found the optimal solution in the 20-job category with high frequency.
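
A hedged sketch of the block-crossover idea: a high-scoring contiguous gene block detected in one parent is copied intact into the child, and the remaining jobs are filled in the other parent's order (as in order crossover). The gene-detection strategy itself is abstracted into the block_score argument, which is an assumption here:

```python
def block_crossover(p1, p2, block_len, block_score):
    # Find the highest-scoring block in parent 1.
    starts = range(len(p1) - block_len + 1)
    s = max(starts, key=lambda i: block_score(p1[i:i + block_len]))
    block = p1[s:s + block_len]
    # Preserve the block in place; fill the rest in parent 2's order.
    child = [None] * len(p1)
    child[s:s + block_len] = block
    fill = iter(job for job in p2 if job not in block)
    for i, gene in enumerate(child):
        if gene is None:
            child[i] = next(fill)
    return child

# Toy example: favor blocks of jobs with close indices.
child = block_crossover([3, 1, 4, 2, 5], [5, 4, 3, 2, 1], 2,
                        lambda b: -abs(b[1] - b[0]))   # -> [3, 1, 5, 4, 2]
```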

      In this paper, we investigate several recent multiobjective Evolutionary Algorithms (moEAs) and propose a new moEA approach for the bicriteria communication spanning tree problem (bSTP). The bSTP is to find a set of links with the two conflicting objectives of minimizing communication cost and minimizing transfer delay. This problem can be formulated as the Multiobjective Minimum Spanning Tree (mMST) problem and is NP-complete. To design a new multiobjective evolutionary approach to the two conflicting objectives of minimizing cost and minimizing delay, an interactive adaptive-weight assignment mechanism is used. In addition, we investigate the effect of different encoding, crossover, and mutation operators on the performance of GAs for MST problems. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. We compare with recent approaches and provide better results on larger instances.

      Advanced Planning and Scheduling (APS) includes a range of capabilities from finite-capacity scheduling at the shop floor level through to constraint-based planning in a multi-plant chain. It mainly supports the integrated, constraint-based, and optimal planning of the manufacturing system to reduce lead times, lower inventories, and increase throughput. In this paper, a novel approach to designing the chromosome, called the multistage operation-based genetic algorithm (moGA), is proposed to improve effectiveness. The objective is to find the optimal resource selection for assignments, operation sequences, and allocation of variable transfer batches, in order to minimize the total makespan, considering setup time, transportation time, and operation processing time. The plans and schedules are designed considering flexible flows, resource status, plant capacities, precedence constraints, and workload balance. Experimental results on various APS problems are offered to demonstrate the efficiency of moGA by comparison with previous methods.

      A new parallel and distributed algorithm is proposed for the Traveling Salesman Problem (TSP) based upon a Multi-agent Evolutionary Algorithm (MAEA). An agent is assigned to a single city and locally builds its neighborhood - a subset of cities, which are then considered as local candidates for a global solution of the TSP. The global solution of the TSP is based on an Ant Colonies (AC)-like paradigm. The cycles found by the ants are placed in a data structure called the Global Table and are evaluated by a genetic algorithm (GA) to modify the rank of cities in local neighborhoods. We present a description of the algorithm and show how the values of some parameters of MAEA influence the quality of solutions. We present the results of an experimental study which shows that the proposed algorithm outperforms two other currently used metaheuristics - AC and artificial immune systems.

      The conceptual design of object-oriented software is difficult to learn and perform yet has a crucial impact on subsequent downstream software development. In an attempt to support the human designer during conceptual object-oriented software design, a multi-objective genetic algorithm has been developed to search and explore the design space. Two case studies are investigated using class cohesion and size as multi-objective fitness functions, and the generated solutions are compared with those from manual designs. While cohesion values are broadly similar, the genetic algorithm also produces a variety of interesting design variants of equivalent fitness that have not been identified by manual design. These promising results, when combined with favorable performance times, suggest that multi-objective genetic algorithms offer potential as the basis of computational tool support for interactive human/machine search and exploration of the conceptual object-oriented design space.

      Cost-based abduction (CBA) is an important problem in reasoning under uncertainty. In CBA, evidence to be explained is treated as a goal to be proven, proofs have costs based on how much needs to be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof are taken as the best explanation for the given evidence. Guided evolutionary simulated annealing (GESA) is a population-oriented simulated annealing (POSA) technique in which the amount of processing allocated to each SA process depends on the performance of that process. In previous work, we applied GESA to a suite of small CBA instances. In this paper, we present two contributions: a more accurate fitness function based on a heuristic repair technique, and an analysis of the run-length distribution (RLD) of GESA using a CBA instance that is specifically generated to be difficult.

      A Modified Simulated Annealing Algorithm (MSAA) is proposed as a statistical technique for obtaining approximate solutions to combinatorial optimization problems. The proposed algorithm is a combination of the Simulated Annealing (SA) and Hamming scan algorithms. It combines the strengths of the two: the global-minimum convergence property of the SA algorithm and the fast convergence rate of the Hamming scan algorithm. Spread spectrum communication systems and multiple radar systems can fundamentally improve system performance by using a group of specially designed orthogonal binary/polyphase coded signals. But the synthesis of orthogonal codes with good autocorrelation and cross-correlation properties is a nonlinear multivariable optimization problem, which is usually difficult to tackle. In this paper the MSAA is used to synthesize orthogonal coded sequence sets with good correlation properties. Some of the synthesized results presented here have correlation properties better than others known in the literature. The convergence rate of the algorithm is also good.
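
A minimal sketch of the kind of cost such a synthesis minimizes: for a set of +1/-1 sequences, sum the squared aperiodic autocorrelation sidelobes and cross-correlations; low cost approaches the orthogonality the paper targets. The paper's exact energy function is not reproduced here, so this form is an assumption. A Hamming-scan step would then greedily flip individual bits and keep flips that lower this cost, while SA occasionally accepts increases to escape local minima:

```python
def acorr(seq, k):
    # Aperiodic autocorrelation of seq at shift k > 0.
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

def xcorr(a, b, k):
    # Aperiodic cross-correlation of a and b at shift k >= 0.
    return sum(a[i] * b[i + k] for i in range(len(a) - k))

def correlation_cost(codes):
    cost, n = 0, len(codes[0])
    for i, a in enumerate(codes):
        cost += sum(acorr(a, k) ** 2 for k in range(1, n))    # sidelobes
        for b in codes[i + 1:]:
            cost += sum(xcorr(a, b, k) ** 2 for k in range(n))
    return cost
```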

      Dafo, a multi-agent framework dedicated to distributed coevolutionary genetic algorithms (CGAs) is used to evaluate dLCGA, a new dynamic competitive coevolutionary genetic algorithm. We compare the performance of dLCGA to other known classes of CGAs for the Inventory Control Parameter optimization problem (ICP) and in particular show how it improves the results of the static version of LCGA.

      Technical analysts use statistics of market activity, such as past prices and volume, to identify patterns that can suggest future stock activity. Considerable skill and experience are involved in such an analysis. Recent work has explored the use of intelligent techniques to analyze and determine technical trading signals, with some success. This paper builds on previous efforts by proposing a fuzzy-neural decision-making system that implements genetic optimization of the fuzzy membership functions. A base-10 genetic algorithm is used to optimize the shape of the membership functions for determining the fuzzified values of the technical indicators. A neural network is developed to determine a final trading decision using the fuzzified values as inputs. Preliminary simulation results show improved performance when compared to the technical indicator alone, as well as to an un-optimized fuzzy-neuro system using the technical indicator.
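
A hedged sketch of the fuzzification step being optimized: the GA evolves the breakpoints (a, b, c) of triangular membership functions, and each indicator value is mapped to membership degrees that feed the neural network. The base-10 GA itself is omitted and all names here are illustrative:

```python
def triangular(x, a, b, c):
    # Degree of membership of x in a triangular fuzzy set peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(value, chromosome):
    # chromosome: flat list of (a, b, c) triples, one per linguistic label.
    return [triangular(value, *chromosome[i:i + 3])
            for i in range(0, len(chromosome), 3)]

# Example: three labels ("oversold", "neutral", "overbought") for an
# RSI-like indicator on [0, 100]; genes would be evolved, not hand-set.
genes = [0, 15, 35,   25, 50, 75,   65, 85, 100]
memberships = fuzzify(42.0, genes)   # inputs to the downstream network
```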

      Clustering, in data mining, is useful for discovering distribution patterns in the underlying data. Clustering algorithms usually employ a distance-metric-based (e.g., Euclidean) similarity measure in order to partition the database such that data points in the same partition are more similar than points in different partitions. In this paper, we study clustering algorithms for data with categorical attributes. Instead of using traditional clustering algorithms based on distances between points, which is not an appropriate concept for Boolean and categorical attributes, we propose a novel concept of HAC (Hierarchy of Attributes and Concepts) to measure the similarity/proximity between a pair of data points. We present a robust clustering algorithm, HAC, that employs the hierarchy of concepts rather than distances when merging clusters. Our methods naturally extend to non-metric similarity measures that are relevant in situations where a domain expert/similarity table is the only source of knowledge. For data with categorical attributes, our findings indicate that HAC not only generates better-quality clusters than traditional algorithms, but also exhibits good scalability properties.

      Function stacks are a directed acyclic graph representation for genetic programming that subsumes the need for automatically defined functions, substantially reduces the number of operations required to solve a problem, and permits the use of a conservative crossover operator. Function stacks are a generalization of Cartesian genetic programming. Graph based evolutionary algorithms are a method for improving evolutionary algorithm performance by imposing a connection topology on an evolutionary population to strike an efficient balance between exploration and exploitation. In this study, function stacks for the parity problem on 3, 4, 5, and 6 variables are tested on fifteen graphical connection topologies with and without crossover. Choosing the correct graph is found to have a statistically significant impact on time to solution. The conservative crossover operator for function stacks, new in this study, is found to improve time to solution by 4–9 fold, with more improvement on harder instances of the parity problem.
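
A minimal sketch of evaluating a Cartesian-GP-style directed acyclic graph, the representation that function stacks generalize: each node applies an operation to the outputs of inputs or earlier nodes, so subexpressions are shared rather than duplicated as in parse trees. The encoding details here are illustrative assumptions:

```python
OPS = {"and": lambda a, b: a & b, "or": lambda a, b: a | b,
       "xor": lambda a, b: a ^ b, "nand": lambda a, b: 1 - (a & b)}

def eval_graph(genome, inputs):
    # genome: list of (op, src1, src2); sources index into the inputs
    # followed by previously computed nodes. Output is the last node.
    values = list(inputs)
    for op, s1, s2 in genome:
        values.append(OPS[op](values[s1], values[s2]))
    return values[-1]

# Odd 3-parity as xor(xor(x0, x1), x2): node 3 is reused by node 4.
genome = [("xor", 0, 1), ("xor", 3, 2)]
assert all(eval_graph(genome, (a, b, c)) == a ^ b ^ c
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```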

      This study shows that connectivity in graph based evolutionary algorithms is directly related to takeover times in standard genetic algorithms. Graph based evolutionary algorithms have been used to control the rate at which information spreads through a population of solutions. Previous empirical studies have indicated that this rate of information transfer is proportional to the connectivity of the underlying graph. While this has given some insight into the differences between population structures, it is desirable to have a mathematical scheme that helps define this relationship. In this study, we calculate the takeover time for a superior solution present in a population for the complete and cycle graphs and compare it with results previously determined using empirical methods. Using these two graphs as extreme cases for comparison, it is shown that as the connectivity of the underlying graph decreases, the takeover time for a solution on a graph based evolutionary algorithm increases, confirming mathematical intuition for graph selection for a desired level of diversity preservation.
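
A hedged sketch of the takeover-time experiment: one superior individual is placed in a structured population, steady-state selection copies a fitter neighbor over a less-fit vertex, and selection events are counted until the superior solution occupies every vertex. The complete and cycle graphs below are the two extremes compared in the paper; the update rule itself is an assumption:

```python
import random

def takeover_time(neighbors):
    n = len(neighbors)
    fit = [0] * n
    fit[random.randrange(n)] = 1          # one superior individual
    events = 0
    while sum(fit) < n:
        events += 1
        v = random.randrange(n)           # vertex to update
        u = random.choice(neighbors[v])   # random neighbor
        if fit[u] > fit[v]:
            fit[v] = fit[u]               # superior solution spreads
    return events

n = 64
complete = [[u for u in range(n) if u != v] for v in range(n)]
cycle = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
# Lower connectivity -> slower spread: takeover_time(cycle) typically far
# exceeds takeover_time(complete).
```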

      The feature selection problem is considered in this paper. New methods of feature selection based on genetic and ant colony optimization algorithms are proposed. Software realizing these methods was developed; it reduces the time of the decision search and can be used for neural network model synthesis. The results of modeling the hardening coefficient of an aircraft-engine part based on the proposed methods are shown.

      The use of new methods for handling incomplete information is of fundamental importance in engineering applications. We simulated the effects of uncertainty produced by the instrumentation elements in type-1 and type-2 fuzzy logic controllers to perform a comparative analysis of the systems' response in the presence of uncertainty. We present an innovative idea for optimizing interval type-2 membership functions using an average of two type-1 systems.

      If-Skip-Action (ISAc) lists are a linear genetic programming data structure that can be used as an evolvable grid robot controller. In this study ISAc lists are modified to run multiple control threads so that a single ISAc list can control multiple grid robots. The threaded ISAc lists are tested by evolving them to control 20–25 grid robots that all must exit a virtual room through a single door. The evolutionary algorithm used rapidly locates a variety of controllers that permit the room to be cleared efficiently.

      As new species arise from biological evolution, new techniques arise when an evolutionary algorithm is used to train virtual robots. This study generalizes the ISAc list encoding for virtual robots working on the Tartarus task and tests a biologically inspired algorithmic technique called hybridization. A collection of 100 populations of virtual robots is trained in three distinct ways. A baseline experiment performs 1000 generations of evolution on each population. Two other collections of runs stop at intermediate points, harvest the currently best controllers from each population, and initialize new runs with the harvested controllers from all of the populations. This harvesting and mixing of evolutionary lines is called hybridization. In one experiment hybridization is performed at generation 500; the other performs it at generations 250, 500, and 750. The goal is to assess the utility of hybridization for the Tartarus task and to compare more and less frequent use of hybridization. Hybridization is found to substantially improve the training of the Tartarus agents, and more frequent hybridization yields greater benefits. The study reports a new maximum cross-validated fitness for Tartarus of 8.49.

      The Tartarus task is a moderately difficult test problem in evolutionary robotics. There is a lack of other, closely related problems that can be used to provide additional challenges to an evolutionary robotics training system. In this paper the Tartarus task is made more difficult by putting a central obstruction on the board. Comparison is made with the original Tartarus task and the cross-problem competence of evolved robots is tested. This latter experiment checks the evolutionary mutual information for the two tasks, the degree to which training on one task pre-adapts evolved controllers to another. It is found that obstructed Tartarus requires roughly 50% more moves and that the two problems have evolutionary mutual information.

      A robot navigation system based on two combined fuzzy logic controllers is developed for a mobile robot. Eight ultrasonic sensors, a GPS sensor, and two fuzzy logic controllers, each with 81 rules, were used to realize this navigation system. The data from the sensors are used as the inputs to each fuzzy controller. The outputs of the fuzzy controllers control the speed of two servo motors. The robot with this navigation system chooses one of the two controllers based on the information from the sensors while navigating toward the targets. Fuzzy controller 1 handles target steering, obstacle avoidance, and following the edge of obstacles. Fuzzy controller 2 makes the robot keep following the edge of obstacles. Simulation results show that the mobile robot's ability to escape from U-shaped obstacles and its ability to steer toward a target while avoiding obstacles were both improved with this combined fuzzy logic controller.

      Nonlinear systems can demonstrate various periodic as well as chaotic behaviors. To characterize their responses over the values of control parameters, different methods have been introduced. One of these methods is the algorithmic complexity measure of the phase space attractors. We have applied this method to a discrete and a continuous dynamical system. Chaotic systems are deterministic constrained systems that are typically compressible compared to random systems. To quantify the order of organization, we have applied algorithmic complexity theory to classify the order of organization in physical systems by studying their generated patterns and computing their degree of compressibility. We have shown that the order of organization for a quasi-organized system increases logarithmically with the evolutionary length of the system. We believe that our approach addresses one of the most acknowledged features of quasi-organized systems and quantitatively locates them between highly organized and constrained-unorganized (chaotic) systems.

      This paper presents research on a novel nonlinear control and stability analysis of genetic networks. Genetic networks can be described by differential equations with SUM logic, which has been found in many natural systems. The transcription rates of the genetic networks are important to the steady state. Though these transcription rates cannot be measured directly, biologists have shown that they are alterable. A novel nonlinear adaptive controller is proposed to drive the network to a desired steady level with real-time adjustment of these factors. The asymptotic stability proof is conducted with a Lyapunov argument, and simulation results show the effectiveness of the design.
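
For reference, genetic networks with SUM logic are commonly written in the following standard form (a hedged reconstruction, since the abstract does not reproduce the paper's equations; the symbols are assumptions):

```latex
\dot{x}_i(t) = -a_i\,x_i(t) + \sum_{j=1}^{n} b_{ij}\, f_j\!\bigl(x_j(t)\bigr) + u_i(t),
\qquad
f_j(x) = \frac{x^{h}}{S^{h} + x^{h}},
```

where x_i is the concentration of the i-th gene product, a_i its degradation rate, b_ij the alterable transcription rates, f_j a Hill-type activation function, and u_i the control input adjusted in real time by the adaptive controller.
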
  • Part II: Bio-Informatics and Computational Biology


      In this paper, we reintroduce Hierarchical Mode Analysis (HMA), which was first proposed in 1968, as a powerful clustering algorithm for bioinformatics. The ability of HMA to find a compact hierarchy of a small number of dense clusters is very important in many bioinformatics problems (for example, when clustering genes in a set of gene-expression microarrays, where only a small number of genes related to the experimental context cluster well, while the rest need to be pruned). We also present two major improvements on HMA: a faster approximation algorithm, and a novel 2-D visualization scheme for high-dimensional datasets. These two improvements make HMA a powerful and promising new tool for many large, high-dimensional clustering problems in bioinformatics. We present empirical results on the Gasch dataset showing the effectiveness of our framework.

      Advances in sequencing technology have led to an explosion in the amount of sequence data available; protein structures and functions from protein-coding genes are traditionally determined using time-consuming laboratory methods, but the sheer volume of data has rendered these methods impractical on large scales. In our attempt to construct automated methods for structural and functional annotation of the Human Genome and comparative genomes, we developed new variants of self-organizing feature map algorithms for identifying transmembrane proteins. This is because, in the human genome, membrane proteins account for roughly one third of protein-coding genes; among these, transmembrane proteins in particular play crucial roles ranging from signal transduction and cell communication to energy metabolism. However, determining a membrane protein's structure pushes the limits of laboratory methods, let alone finding its complete functions. We therefore use machine learning techniques to predict transmembrane proteins from polypeptide structure information alone. Features that are useful for this task are identified through a detailed analysis of various physiochemical properties of proteins. The new variants of self-organizing feature map algorithms are designed to handle noisy, class-imbalanced, complex biomedical data. We evaluate the effectiveness of our classifier in discriminating residues belonging to transmembrane segments from those belonging to non-transmembrane segments, and compare our results to other popular classifiers such as decision trees and support vector machines.

      Developing intelligent systems to distinguish benign from malignant tumors is important for effective therapy planning for cancer patients. We investigate the expressions of the human telomerase catalytic component (hTERT) mRNA, hTERT protein, Ki-67 antigen, and p27kip1, and use human pheochromocytomas and paragangliomas as our examples for this task. Using in situ hybridization technology, we determine the hTERT mRNA expressions in malignant tumors and compare them with benign tumors. In addition to determining Ki-67 antigen expression, we also determine hTERT by immunohistochemistry to confirm the mRNA expressions. We find that hTERT mRNA expressions are correlated with hTERT expressions. We evaluate the expressions of these proteins and the hTERT mRNA as diagnostic markers for predicting the biological behaviors of tumors. We find that hTERT mRNA detection by in situ hybridization, hTERT expressions, and Ki-67 antigen expressions are all useful for differentiating malignant from benign pheochromocytomas and paragangliomas. Since more than one expression is useful, it is advantageous to utilize them jointly. We therefore design intelligent systems using multiple features for the task of distinguishing benign and malignant tumors. The designed intelligent systems utilize the maximum contrast tree, the multiply-labeled instance classifier, and special ensemble methods we developed and proved beneficial for enhancing diagnostic accuracy.

      In this paper we use a PCA neural network to detect similarities and dissimilarities between any two given families of biological sequences. Traditionally, PCA is a transformation technique used to reduce the dimensionality of a dataset, transforming it into a lower-dimensional space with minimal loss of information. In this paper we use PCA as a measure to detect similarities and dissimilarities between families of sequences. This is in contrast to detecting similarities between two sequences or between a sequence and a family. Of course, this method can be generalized to arbitrary datasets; it is not restricted to families of sequences. We propose a novel algorithm that can be used as a similarity measure; we call it a PCA-neural-network-based similarity measure. The performance of the proposed measure shows robustness and accuracy in similarity detection.

      Spiking neuron systems are based on biologically inspired neural models of computation, since they take into account the precise timing of spike events and are therefore suitable for analyzing dynamical aspects of neuronal signal transmission. Furthermore, they are closer to biophysical models of neurons, synapses, and related elements, and their synchronized firing of neuronal assemblies could serve the brain as a code for feature binding and pattern segmentation. In this research we develop a Spiking Neuron Simulator (SNS) with seven input generators (Sin Waves, Absolute Sin Waves, Random Noise, Discrete Noise, Pulse EXP, Pulse CST, and Square Wave) and three membrane potential outputs: Integrate & Fire, Spike Long Potential, and Spike Short Potential. The SNS includes the following four layers: sensory, sub-cortical 1, sub-cortical 2, and cortical. Our SNS simulator demonstrates the neuron's ability to fire at a particular threshold, as well as how many times and at what intensity the spikes fire within a particular time period for a particular constant input.

      Domains are conserved sequence regions in proteins. A protein often contains one to three domains, and the protein function may be inferred from the domain information. While many protein domains have been functionally characterized, the functions of some conserved sequence regions are still poorly understood. The objective of this work is to facilitate the functional annotation of domains using protein-protein interaction data. Our assumption is that if two or more domains co-occur in protein-protein interactions, these domains may be involved in the same biological process. In this work, we first investigate several association rule mining techniques for finding domain correlations. We then propose a new measure, proximity, for mining domain association rules. Finally, we discuss the potential uses of these rules for understanding domain functions.

      Almost all cardiac abnormalities manifest themselves as an aberration in the normal phonocardiogram (PCG). Such aberrations are called murmurs, and locating a cardiac murmur in the PCG, to study the physical characteristics of the underlying abnormality, is the first step in computer-assisted cardiac diagnosis. In this paper we present a method to quantify the visual simplicity of heart sounds that is independent of the sound's absolute amplitude and frequency, while resulting in better characterization of murmurs. We then show that the simplicity of the systolic time interval assumes well-defined patterns in the presence of systolic murmurs, which can be used to classify them according to their location in systole and their most probable mechanism of production.

      One of the essential aspects of cluster analysis is the accurate determination of the number of clusters in any arbitrary dataset with no a-priori information about their grouping structure. A simple yet powerful algorithm is proposed and investigated in this paper to accurately determine the number of clusters in any arbitrary dataset composed of “reasonably separated” data clusters. Verification outcomes are provided to demonstrate satisfactory performance of the algorithm on a wide and representative variety of artificial datasets.

      Photo-plethysmography (PPG) is frequently used in research on the microcirculation of blood. It is a non-invasive procedure and takes minimal time to carry out. Usually PPG time series are analyzed by conventional linear methods, mainly Fourier analysis. These methods may not be optimal for the investigation of nonlinear effects of the heart circulation system such as vasomotion, autoregulation, thermoregulation, breathing, and vessel activity. The wavelet analysis of the PPG time series is a specific, sensitive nonlinear method for the in vivo identification of heart circulation patterns and human health status. This nonlinear analysis of PPG signals provides additional information which cannot be detected using conventional approaches. The wavelet analysis has been used to study healthy subjects and to characterize the health status of patients with a functional cutaneous microangiopathy associated with diabetic neuropathy. The non-invasive in vivo method is based on the radiation of monochromatic light through an area of skin on the finger. A Photometrical Measurement Device (PMD) has been developed. The PMD is suitable for non-invasive continuous on-line monitoring of one or more biologic constituent values and blood circulation patterns.

      Advances in data mining have allowed for the development of a new class of diagnostic tools for use by clinical practitioners in critical care situations. These tools enhance the clinician's diagnostic expertise by providing additional information based on the specific case being evaluated. This paper presents a comparison of the diagnostic capabilities in three situations: the clinician's assessment, the diagnostic tool's assessment, and the assessment made by combining the clinician's diagnosis with the information contained in the tool. The tool, a Bayesian network generated using a genetic algorithm based search framework, utilized a dataset created from initial case data collected in an emergency room setting. The results indicate that while the clinician outperforms the tool in ruling out ACS, there is a significant improvement overall when the clinician's assessment is added to the tool's input.

      The Gray Code Optimization (GCO) algorithm is a deterministic algorithm based on the Gray code representation. It sometimes suffers from slow convergence and sub-optimal solutions. The Expectation Maximization (EM) algorithm is used to analyze how the GCO explores the search space. The results indicate that it is similar to generating samples from a mixture Gaussian distribution. Based on these findings, a novel stochastic optimization algorithm based on the mixture Gaussian model is proposed. The new algorithm is applied to molecular conformation search. Obtaining global minimum-energy conformations of a molecule is a very hard optimization problem. The difficulty arises from two factors: the conformational space of a reasonably sized molecule is very large, and there are many local minima that are hard to sample efficiently. The energy landscape in the conformational space is very rugged, and there are many large barriers between local minima.

      Antibiotics have been given to production animals for several decades as a performance enhancer. For nearly as long there has been a concern that using these antimicrobials in production animals could lead to bacteria developing resistance to the antibiotics they are eventually exposed to. While this risk is still undefined, it would be of benefit to minimize the ratio of resistant bacteria to susceptible bacteria while still maintaining the benefits of administering performance enhancers to livestock. In this research we use a graph based evolutionary algorithm to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use while minimizing any risk from the presence of resistant bacteria. Previous work investigated the effect on Campylobacter spp. only. This study examines different regimens of Tylosin Phosphate use on all bacteria populations, divided into Gram-positive and Gram-negative types, with a focus on Campylobacter spp.

      Logistic Regression (LR) is used by a large community in the computer aided diagnosis (CAD) of cancer. However, very little work has been presented comparing LR to other statistical learning theory paradigms, such as SVMs. Preliminary conclusions demonstrate that both LR and EP-derived SVMs provide comparable results, even though the SVM diagnostic Az is slightly more accurate. However, a surprising preliminary result is that the important mammogram discriminators of breast composition, age, and family history are not significant contributors to an accurate diagnosis, as demonstrated by the chi-squared values and verified by both diagnostic paradigms. Investigation is currently ongoing to ascertain the reason for this unexpected result.

      ALFRED (ALlele FREquency Database) is a curated repository (Department of Genetics, Yale School of Medicine) of allele frequency data on 494 anthropologically defined human populations, for over 1600 polymorphisms (genetic variations). However, the matrix of population and polymorphism data is sparse, with data collected for only about 6% of the possible population-polymorphism combinations. When population geneticists seek to find the largest possible subset of populations and polymorphisms for which complete data exist, the researchers are confronted by a problem equivalent to the maximum biclique problem in graph theory. That problem has been shown to be NP-complete (Peeters, 2003), or computationally intractable. A heuristic approach can be used for smaller problems of this type (Alexe et al., 2004), but this, too, fails when confronted with the full ALFRED database. The authors have developed a genetic algorithm to evolve good, if not provably optimal, solutions to the problem of finding the largest subset of the ALFRED database for which complete data exist. In addition, the authors have discovered a short-cut heuristic for finding large subsets of complete data quickly (in approximately 10 minutes) using commonly available computers.

      Biological processes are random in nature, and studying them through laboratory experiments alone is costly and time-consuming. Applying mathematical modeling provides a more efficient method for understanding and modifying them. Recently we proposed an algorithm, Box, to assist in modeling and controlling any biological pathway. In the Box algorithm, the output (protein concentration) control and optimization is carried out by applying simulated annealing to selective high-sensitivity reactions. Here we apply an alternative approach, a genetic algorithm, in place of simulated annealing in the Box algorithm. The improved Box algorithm compares outputs derived by simulated annealing with outputs derived by the genetic algorithm and chooses the better set of values. We thus provide a tool to guide the modification of biological pathways that is both accurate and easy to apply. We illustrate our technique using the TNFα-mediated NF-κB pathway.

      Protein-DNA interactions play important roles in gene regulation, DNA metabolism, and chromatin formation. Although structural data are available for a few hundred protein-DNA complexes, the molecular recognition pattern is still poorly understood. With the rapid accumulation of sequence data from many genomes, it is desirable to build a classifier using the structural information and then use the classifier to identify potential DNA-binding residues in protein sequences. In this work, we have trained support vector machines (SVMs) as well as artificial neural networks (ANNs) with five sequence-derived features: the molecular mass, hydrophobicity index, side chain pKa value, solvent accessible surface area, and conservation score of an amino acid. It is found that SVMs outperform ANNs for this supervised learning task. The best SVM classifier can predict DNA-binding residues at 70.32% sensitivity and 75.83% specificity. Efficient online prediction of DNA-binding residues is available through the BindN web server (http://bioinformatics.ksu.edu/bindn/).
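
A minimal sketch of the supervised setup described above, with scikit-learn as a modern stand-in for the paper's SVM training: each residue is a vector of the five sequence-derived features with a binding/non-binding label. The feature rows below are fabricated placeholders, included only so the example runs:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Columns: molecular mass, hydrophobicity index, side-chain pKa,
# solvent-accessible surface area, conservation score (placeholder values).
X = [[156.2, -4.5, 12.5, 241.0, 0.9],
     [ 71.1,  1.8,  0.0, 113.0, 0.2],
     [128.2, -3.9, 10.5, 211.0, 0.8],
     [113.2,  4.5,  0.0, 182.0, 0.1]]
y = [1, 0, 1, 0]                          # 1 = DNA-binding residue

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[150.0, -4.0, 11.0, 230.0, 0.85]]))
```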

      The Receiver Operating Characteristic (ROC) curve can be thought of as a projection of a three-dimensional trajectory through true-positive, false-positive, and decision-threshold space onto the true-positive/false-positive plane. Using this frame of reference, Alsing et al. (2002) developed a metric for the comparison of classifiers. It was shown that this metric appears to capture information concerning the robustness of a classifier. In this paper we further investigate the notions proposed in Alsing et al. relative to the robustness of total classification accuracy at a point of interest. Investigations are conducted by applying several common classification methods to binary classification problems using breast cancer data from the University of Wisconsin.
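
A minimal sketch of the view described above: sweeping the decision threshold traces a trajectory through (threshold, TPR, FPR) space, and the ROC curve is its projection onto the TPR/FPR plane; all names here are illustrative:

```python
def roc_trajectory(scores, labels):
    # scores: classifier outputs; labels: 1 = positive, 0 = negative.
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((t, tp / pos, fp / neg))   # (threshold, TPR, FPR)
    return points

# Dropping the first coordinate of each point projects the trajectory
# onto the (TPR, FPR) plane: the familiar ROC curve.
```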

      Many machine learning methods focus on the quality of prediction results as their final purpose. Spline kernel based methods also attempt to provide transparency to the prediction by identifying features that are important in the decision process. In this paper, we present a new heuristic for efficiently computing the sparse kernel in SUPANOVA. We applied it to a benchmark Boston housing market dataset and to the socially important problem of improving the detection of heart diseases in the population using a novel, non-invasive measurement of heart activity based on the magnetic field produced by the human heart. On this data, 83.7% of predictions were correct, exceeding the results obtained using the standard Support Vector Machine and equivalent kernels. Equally good results were achieved by the spline kernel on the benchmark Boston housing market dataset.

      In this paper, we analyze the performance of Time Delay Neural Networks (TDNN) and Hidden Markov Models (HMM) for Electroencephalogram (EEG) signal classification. The specific focus of this study is Brain-Computer Interfacing (BCI), where near-real time detection of mental commands during a multi-channel EEG recording is desired. We argue that HMM and TDNN should be preferred over the rigid, one-size-fits-all methods of the more traditional EEG signal classifiers. To analyze the utility of modern classification methods for BCI, we compare and discuss the performance of our suggested TDNN and HMM EEG classifiers with the reported best results on BCI 2003 EEG benchmark dataset Ia.

      This article proposes a method for modeling and classification applied to uterine contractions in the electromyogram (EMG) signal for the detection of preterm birth. The frequency content of the contraction changes from one woman to another and during pregnancy. First, we apply an AR model to the uterine EMG signal to calculate the AR coefficients ai. Wavelet decomposition is used to extract the parameters of each simulated contraction, and an unsupervised statistical classification method based on the Fisher test is used to classify the signals. A principal component analysis projection is then used to highlight the groups resulting from this classification. Results show that uterine contractions may be classified into independent groups according to their frequency content and according to term (at recording, or at delivery).

      Studies of human perception show that human observers can almost instantaneously recognize biological motion patterns. These patterns can be decomposed into identifiable elements, which we organize in a hierarchical structure. The objective of this research is to find such a motion hierarchy using the visual information collected through motion capture and the learning algorithms of Self-organizing Maps (SOM). Such structure is sought to provide a machine with the ability to recognize biological motion patterns and to control the generality, precision, and realism in generating motion models for clinical studies, animation, and other disciplines.

      The purpose of this study was to develop a computer-based second opinion diagnostic tool that could read microscope images of lung tissue from resection and lung cells from needle biopsies, and then classify these tissue samples as normal or cancerous. This problem can be partitioned into three areas: segmentation, feature extraction and measurement, and finally classification. One component of this research is to introduce the Mean Shift segmentation algorithm as a prior stage to a kernel-based extension of the Fuzzy C-Means clustering algorithm that provides a coarse initial segmentation. This process is followed by heuristic-based mechanisms to improve the accuracy of the segmentation. The segmented images are then processed to extract and quantify features. Finally, these measured features are used by a Support Vector Machine (SVM) to classify the tissue sample as cancerous or non-cancerous. The performance of this approach was tested using a database of 85 images collected at the Moffitt Cancer Center and Research Institute. These images represent a wide variety of normal lung tissue samples, as well as multiple types of lung cancer.

      Communications systems frequently use a technique known as dual-tone multifrequency dialing (DTMF). Many possible solutions have been proposed for the design of DTMF decoders. Hardware-based solutions use filter banks to separate signals into low-band and high-band channels and to detect specific frequencies. Outputs from the filters are used to reconstruct the depressed pushbutton. These designs often increase the parts count, which results in larger circuit board space and increased costs. Software-based approaches mainly use FFT algorithms or filter banks implemented on a DSP. These designs have complex algorithms with large computational requirements. This paper discusses an approach to develop an efficient neural network based implementation of a DTMF decoder.

      In this paper, we present a simulation and hardware realization, on a compact electronic chip, of the Growth Hormone secretion mechanism, extending a previous model [1]. After realization, the chip is a stand-alone electronic device that mimics the mechanism. The chip is a Field Programmable Gate Array (FPGA). The mechanism is developed based on a modified bio-mathematical nonlinear delay differential equation model [3][6]. The model explores the effects of Growth Hormone Releasing Hormone (GHRH), Somatotropin Release-Inhibiting Factor (SRIF), and Ghrelin (GHS) on Growth Hormone (GH). A Computer Aided Design (CAD) package that includes a Hardware Description Language (HDL) simulator has been used to write and compile the HDL code, test the code, verify the simulation, and synthesize the code into Register Transfer Level (RTL) logic blocks. After synthesis, the logic blocks are downloaded onto the FPGA. Our results indicate that the output of the chip and the mathematical simulation are identical.
  • Part III: Infrastructure Systems Engineering


      The objective of this research study was to develop artificial neural network (ANN)-based models for nondestructively assessing the condition of concrete (rigid) pavement systems. The backcalculated layer properties consisted of the concrete pavement layer modulus (EPCC), the coefficient of subgrade reaction (ks), and the radius of relative stiffness (l) for rigid pavements. The input parameters used in the analyses are the Falling Weight Deflectometer (FWD) deflection basin data and the thickness of the concrete pavement layer (hPCC). The effect of the load transfer efficiency (LTE) on the interior loading deflections was also investigated in this study and the results were summarized. In order to develop ANN-based backcalculation models for rigid pavement systems, ISLAB2000 finite element (FE) model solutions were utilized to generate a synthetic database using some 7,800 runs. The ISLAB2000 solutions and ANN model predictions showed excellent agreement with each other. Based on the results of the analysis, the developed ANN models have significant potential to predict the concrete pavement layer modulus, coefficient of subgrade reaction, and radius of relative stiffness with high accuracy.
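
A hedged sketch of the backcalculation mapping described above: inputs are the FWD deflection basin plus slab thickness, outputs are the layer parameters (EPCC, ks, l). A scikit-learn MLP stands in for the paper's ANN, and the random arrays are placeholders for the ISLAB2000 synthetic database:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(7800, 8))    # e.g., 7 deflection sensors + thickness hPCC
y = rng.uniform(size=(7800, 3))    # targets: EPCC, ks, l (placeholders)

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=500)
ann.fit(X, y)                      # train on the synthetic FE database
e_pcc, k_s, ell = ann.predict(X[:1])[0]   # backcalculate one measured basin
```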

      This paper describes the use of Artificial Neural Networks (ANNs) as pavement structural analysis tools for the rapid and accurate prediction of layer parameters of asphalt overlaid Portland Cement Concrete (PCC) composite pavements subjected to typical highway loadings. The DIPLOMAT program was used for solving the deflection parameters of composite pavements. ANN models trained with the results from the DIPLOMAT solutions have been found to be practical alternatives. The trained ANN models in this study were capable of predicting Asphalt Concrete (AC) and PCC moduli, and coefficient of subgrade reaction (ks) with low Average Absolute Errors (AAEs). ANN backcalculation models were also capable of successfully predicting the pavement layer moduli from the Falling Weight Deflectometer (FWD) deflection basins and they may be used in the field for rapidly assessing the condition of pavement sections during the FWD testing. The developed method was successfully verified using results from Long-Term Pavement Performance (LTPP) FWD tests conducted at US29, Spartanburg County, South Carolina.

      The primary objective of this study was to assess the pavement structural deterioration based on Non-Destructive Test (NDT) data using an Artificial Neural Networks (ANN) based approach. ANN-based prediction models were developed for rapid determination of flexible airfield pavement layer stiffnesses from actual NDT deflection data collected in the field in real time. For training the ANN models, ILLI-PAVE, an advanced finite-element pavement structural model which can account for non-linearity in the unbound pavement granular layers and subgrade layers, was employed. Using the ANN-predicted moduli based on the NDT test results, the relative severity effects of simulated Boeing 777 (B777) and Boeing 747 (B747) aircraft gear trafficking on the structural deterioration of National Airport Pavement Test Facility (NAPTF) flexible pavement test sections were characterized.

      The fatigue behavior of asphalt concrete is so complicated that a comprehensive fundamental theoretical model is not available. Therefore, a reliable empirical method for predicting fatigue life based on experimental data remains a desirable approach. However, the complexity of the fatigue process and the noise associated with fatigue test results leave even traditional empirical methods, such as regression analysis, handicapped in producing a sufficiently accurate model. Artificial neural networks (ANNs) have the ability to derive considerably complex relationships and associations from experimental data while filtering out the effect of noisy data. In this study, the potential use of ANNs for fatigue life prediction was explored, and comparisons between ANN-based model predictions and predictions via multi-linear as well as other published models showed that ANN-based models provide much more accurate predictions.

      This paper describes the development of a new sensor technology for the condition assessment of civil infrastructure. The study explores high-performance Electromagnetic Acoustic Transducer (EMAT) sensors that are capable of launching and receiving ultrasonic pulses in prestressing strands embedded in concrete. The effect of increasing levels of damage in the strand on ultrasonic wave propagation characteristics is monitored using the EMATs. The collected data were used to develop and train artificial neural networks (ANNs), which have the capability of modeling non-linear and noisy data sets. Initial results demonstrate the potential of a health monitoring technique for prestressing strands embedded in concrete.

      As worldwide infrastructure ages quickly, long-term structural health monitoring (SHM) has been attracting significant attention as a potential tool to prevent catastrophic structural failures and improve bridge management. Strain based damage detection is highly attractive for long term on-line SHM because strain is easy to measure and is well understood by bridge engineers. This paper presents a novel strain based structural damage detection method using a new damage indicator, called the event-based Cumulative Live Load Strain Energy Density (CLLSED). This indicator is used to overcome the difficulties associated with the fact that the damage influence zone is typically very small. CLLSED enhances the sensitivity of strain measurements through mathematical integration, and its effectiveness in detecting small damage is shown through in-depth analysis and simulation. In this work, an artificial neural network is used as the mathematical engine powering the mapping between the measurable structural response (CLLSED in this case) and the indeterminate features of structural damage (damage severity and damage location). Feedforward back-propagation supervised ANNs were trained to predict structural damage severity and location in three stages. During the first stage, an ANN was used to predict the structural live load condition, represented as truck type and truck weight. Inputs to this ANN were the normalized CLLSED values derived from the strain data collected by multiple strain sensors. The goal of the second stage is to detect the occurrence of damage and to predict the associated damage severity. The truck type and truck weight obtained from the first stage, along with the normalized CLLSED, were used as the network input vector. During the final stage, the damage severity, truck type, truck weight, and normalized CLLSED were combined as the input vector for damage location identification. Synthetic ANN analysis results are used to demonstrate the effectiveness of the proposed diagnosis strategy.
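
A minimal sketch of the CLLSED indicator's core idea: strain from each live-load event is integrated (here squared strain via the trapezoidal rule, a stand-in for strain energy density) and accumulated across events, so small damage-induced changes are amplified over time. The paper's exact energy-density formula is not reproduced here, so this form is an assumption:

```python
def event_energy(strains, dt):
    # Trapezoidal integral of strain^2 over one truck-crossing event.
    sq = [s * s for s in strains]
    return sum((sq[i] + sq[i + 1]) * dt / 2.0 for i in range(len(sq) - 1))

def cumulative_llsed(events, dt):
    total, series = 0.0, []
    for strains in events:
        total += event_energy(strains, dt)
        series.append(total)   # grows event by event; damage shifts the trend
    return series
```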

      Adding chemical agents to stabilize problematic highway subgrade soil is a common engineering practice in the United States. Because theoretical accomplishments in soil chemical stabilization lag far behind engineering practice, laboratory testing, which is expensive and time-consuming, is almost always necessary to determine the effectiveness of a soil stabilizer in enhancing the engineering properties of the soil. Over the years, a large amount of valuable data from laboratory tests on stabilizing different soils with different chemical stabilizers has accumulated in the literature. Efforts to extract relationships and associations from the existing test data, in order to provide guidance for new soil chemical stabilization cases, have been carried out for many years; however, due to the limitations of the technology used (statistical regression), reliable models are still not available. In this paper, an Artificial Neural Network (ANN) approach to studying soil chemical stabilization is introduced. An ANN model to predict the unconfined compression strength (UCS) of stabilized soil was built based on experimental data from stabilizing three representative Kansas embankment soils with five chemical stabilizers. The results showed that the trained ANN model could precisely predict the UCS of stabilized soil. Furthermore, the ANN model enables us to study the significance of each input factor, thus providing a powerful tool for optimizing the mixture and construction design.

      The Mau-Lo Creek cable-stayed bridge, located in central Taiwan, consists mainly of a distinctive steel pylon with a parabolic arch shape and twin steel girders laid out in plan along a clothoid curve. Because the bridge exhibits complicated structural characteristics, structural health diagnosis is indeed necessary. This study proposes a structural health diagnosis method for cable-stayed bridges using Expert Group Neural Networks (EGNN) and field measurement data. The EGNN are used to inversely estimate the corresponding axial forces of all the stay cables from three sets of measured pylon rotations, offering an easier and more efficient way to solve this kind of inverse problem. Based on the evaluated cable forces, the structural behavior of the bridge, including its deformation and stress state, can be traced successfully. Finally, the proposed optimization procedure is used to determine the appropriate axial force combination within the thirty-six stay cables of the Mau-Lo Creek cable-stayed bridge.

      According to the Wisconsin Department of Transportation, many people die in alcohol-related traffic crashes every year; in 2002 alone, 290 people were killed in alcohol-related traffic crashes in Wisconsin. A thorough knowledge of the factors leading to alcohol-related traffic crashes is required in the development of safety improvement plans to minimize the occurrence of these crashes. In addition, deciding on the timing of any safety improvement requires a good assessment of current and future alcohol-related crashes within a given highway system. A three-layer neural network model is presented in this paper to predict alcohol-related traffic crash rates, based on licensed drivers and on registered vehicles, for Wisconsin counties. A variety of backpropagation neural network (BPNN) topologies with different numbers of inputs and hidden neurons are investigated and their prediction performances compared. Based on the analysis of the results, it is concluded that three factors (population per liquor license density, guilty outcomes of adjudicated cases, and total had-been-drinking (HBD) drivers) have the major impact on predicting the crash rate. It is also observed that the proper choice of training process and neural network (NN) parameters significantly affects the prediction accuracy. In addition, the NN model predicts the crash rate based on licensed drivers better than the rate based on registered vehicles.

      A strain sensor network is evaluated using artificial neural networks (ANN) to perform traffic monitoring. The approach uses strain profiles to determine the number of vehicles and the weight of each vehicle. In this simulation study, the vehicles travel at a constant speed across a three-bay truss bridge. Each vehicle may enter the bridge at an independent time, and up to four vehicles may travel the bridge at once. The training and testing data for truss member strain are obtained from a finite element model that incorporates multiple, single-direction rolling loads. Strain profiles from the truss member elements serve as inputs to multi-layered feed-forward ANNs trained using backpropagation (BPNN). The proposed system can be scaled to more complex traffic scenarios and truss bridges.

      Models to predict travel time on arterials in both congested and non-congested conditions were developed using State-Space Neural Network (SSNN) models and easily observable, conveniently available traffic variables. The mean absolute percentage error of the modeled travel time was about 9% for through traffic movements. The results show that SSNNs can provide a framework for accurate, efficient, and robust travel time prediction.

      Syndromic Surveillance is the study of public health data with the purpose of discovering patterns in the data that may be indicative of public health issues such as infectious disease. This study looks at the use of a supervised artificial neural network to detect aberrations in the pattern of visits to emergency departments. To facilitate an understanding of how well the neural network technique is faring, a comparison was run between it and a selection of statistical techniques used by other researchers in syndromic surveillance. The other systems used for comparison are RODS and EARS, both of which are popular choices for public health professionals.
  • Part IV: Pattern Recognition


      This paper introduces a new metric for evaluating the quality of images that have been processed by a segmentation algorithm which replaces the pixels in each segment with pixels of a single color. Segmentation algorithms divide an image into homogeneous regions; they are useful in image processing. The novel segmentation algorithm used here is an evolutionary algorithm that evolves weighted Voronoi tessellations. This algorithm optimizes zero-variance image segments that capture irregular feature boundaries more effectively than the traditional segmentation algorithms. Many robust metrics are available for regular-shaped blocks with non-zero variance in each block. The values returned by these metrics for zero-variance image segments are inconsistent with perceived image quality. To address these limitations, a new image quality metric, Decision Quality Index of Images (DQII) is proposed. DQII takes into consideration the spatial frequency change, the luminance distortion, and the color variance between the original image and the segmented image.

      Advances in sensor and GPS technologies make possible better guidance tools for vehicle/aerial navigation. In the work reported here we look at the detection of hazards, such as aircraft or other objects, on or near the target runway. Our system consists of two modules: 1) regions of interest (ROI) detection, and 2) hazard recognition. One of the harder problems in object recognition is to segment the target object from a cluttered background. In this system we use a “poor man's” segmentation, by taking advantage of the fact that we have an approximate reference that we can differentiate from. The regions of interest are defined as significant differences between the input image and the reference image. Since this differencing can be complex due to the fact that the images may not be precisely registered, we employed a novel histogram method which is reasonably invariant to spatial transformations for ROI detection.

      This paper presents a novel fusion method for evaluating fingerprint quality that combines local and global features of a fingerprint image in the frequency and spatial domains. A ring structure of the DFT magnitude, directional Gabor features, and the black pixel ratio of the central area are applied; these features are efficient indexes for fingerprint quality assessment. Although additional features could be introduced, their slight improvement in performance would come at the expense of complexity and computational load. Each of the features is first employed individually to assess fingerprint quality, and their evaluation performance is discussed. Then the suggested fusion of the features is presented to obtain the final quality scores. Experiments on our public-security fingerprint database demonstrate that the proposed fusion scheme can accurately detect fingerprints of poor quality.

      Automatic fingerprint identification and classification has become one of the most important biometric technologies and has drawn a substantial amount of attention in the past 10 years. Fingerprint recognition entails the extraction of patterns of ridges and furrows from the surface of a fingertip. The uniqueness of a fingerprint can be defined by a set of local ridge characteristics and their relationships; currently more than a hundred of these characteristics and relationships, called minute details, have been identified. Among them, ridge endings and ridge bifurcations are the most commonly used features in fingerprint identification. Since automatic fingerprint recognition depends heavily on the quality of the input fingerprint image, and about 10% of fingerprint images are of relatively poor quality, it is necessary to enhance the quality of the input image to facilitate reliable recognition and differentiation in the identification process. This paper uses a fuzzy clustering algorithm to distill clear fingerprint areas from a given fingerprint. Low-pass filters are used on the local orientation image to reduce interruption and noise, and a modified anisotropic filter is applied to enhance the ridge and furrow information in the fingerprint area.

      We propose a new method for handwriting recognition that utilizes geometric features of letters. The paper deals with recognition of isolated handwritten characters using an artificial neural network. The characters are written on a regular sheet of paper with a pen, captured optically by a scanner, and processed into a binary image that is analyzed by a computer. In this paper we present this new method for off-line handwriting recognition and also describe our research and the tests performed on the neural network.

      Applying fuzzy ARTMAP to complex real-world problems such as handwritten character recognition may lead to poor performance and a convergence problem whenever the training set contains very similar or identical patterns that belong to different classes. To circumvent this problem, some alternatives to the network's original match tracking (MT) process have been proposed in the literature, such as using negative MT or removing MT altogether. In this paper, the impact of different MT strategies on fuzzy ARTMAP performance is assessed using different pattern recognition problems: two types of synthetic data as well as real-world handwritten digit data. Fuzzy ARTMAP is trained with the original match tracking (MT+), with negative match tracking (MT−), and without the MT algorithm (WMT), and the performance of these networks is compared to the case where the MT parameter is optimized using a PSO-based strategy, denoted PSO(MT). Through a comprehensive set of computer simulations, it has been observed that by training with MT−, fuzzy ARTMAP expends fewer resources than with other MT strategies, but can incur a significantly higher generalization error, especially for data with overlapping class distributions. The generalization error achieved using WMT is significantly higher than with other strategies on non-overlapping data with complex non-linear decision bounds, and the number of internal categories increases significantly. PSO(MT) yields the best overall generalization error, and a number of internal categories that is smaller than with WMT, but generally higher than with MT+ and MT−. However, this strategy requires a large number of training epochs to converge.

      This paper presents a fuzzy filter consisting of two filters, a Gaussian noise filter and an impulse noise filter, for eliminating mixed noise. For the Gaussian filter, a fuzzy set called "small" is derived to represent the disorder in a pixel arising from a neighborhood corrupted with Gaussian noise, and an expression for the correction is developed based on the intensity of the central pixel and the membership function. Similarly, the correction for impulse noise is developed by finding the middle-ranking pixels in the neighborhood of the central pixel. The difference between the average of the middle-ranking pixels and the central pixel is used to obtain a membership value which, when multiplied by the difference, gives the correction. Next, the type of noise is detected by finding the aggregate of the four highest memberships of the neighborhood pixels: if this aggregate exceeds a threshold, the noise is Gaussian; otherwise it is impulse noise. The corrupted pixel is then corrected by the appropriate correction term. The results are found to be satisfactory.
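
      The impulse-noise branch is concrete enough to sketch. The membership function shape and its scale parameter below are assumptions; the abstract only states that the membership value multiplied by the difference gives the correction.

```python
import numpy as np

def impulse_correction(window, k=3, scale=64.0):
    """Fuzzy impulse-noise correction for one 3x3 window (sketch).

    Follows the abstract's recipe: average the middle-ranking
    neighbors, take the difference from the central pixel, map the
    difference to a membership value, and multiply the two. The
    membership function shape and `scale` are assumptions."""
    center = window[1, 1]
    neighbors = np.sort(np.delete(window.ravel(), 4))          # 8 ranked neighbors
    mid = len(neighbors) // 2
    middle = neighbors[mid - k // 2 : mid + k // 2 + 1]        # middle-ranking pixels
    diff = middle.mean() - center
    membership = 1.0 - np.exp(-abs(diff) / scale)              # big gap -> strong pull
    return center + membership * diff

window = np.array([[120, 118, 121], [119, 255, 120], [118, 122, 119]], float)
print(impulse_correction(window))   # pulls the impulse-corrupted center toward ~120
```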

      The selection of the most relevant features can significantly reduce the resource requirements associated with fuzzy ARTMAP neural networks for the classification of high-dimensional data extracted from satellite imagery. This paper introduces a variant of a wrapper-type feature ranking technique that was previously proposed for ARTMAP neural networks by Parsons and Carpenter. As with the originally proposed technique, it evaluates the relevance of features based solely on between-class scatter derived from the geometry of internal ARTMAP categories. This paper also explores the inclusion in these feature ranking techniques of a within-class scatter measurement. Comparative simulations, performed on the Landsat multi-spectral imagery benchmark ('Satimage' from the StatLog repository), indicate that significantly lower generalization error and fewer resources may be achieved by learning from feature subsets produced by techniques that evaluate the relevance of features using both between- and within-class scatter.

      After achieving successful automatic SOM-based color segmentation of static images, we tackle video sequences. We developed an algorithm that tracks color segments (or objects) throughout a video sequence and indicates their position in the scene at each moment. Color objects are tracked from their appearance in the scene, throughout their existence, until their disappearance from the scene. We lay out some new definitions and rules in the tracking domain that facilitate the matching process and the validation of that matching. The matching uses five spatial descriptors of the color objects as entities, without considering the pixels inside each object; to be valid, a match should remain the same whether the video sequence is run forward or backward. The algorithm is able to track all objects present in the sequence, rather than focusing only on a pre-defined or chosen foreground, and does so very quickly. The adopted assumption about the video sequence is that its most dominant colors are maintained throughout all of its images.

      A new biometric indicator based on the patterns of conjunctival vasculature is proposed. Conjunctival vessels can be observed on the visible part of the sclera that is exposed to the outside world. These vessels demonstrate rich and specific details in visible light, and can be easily photographed using a regular digital camera. In this paper we discuss methods for conjunctival imaging, preprocessing, and feature extraction in order to derive a suitable conjunctival vascular template for biometric authentication. Commensurate classification methods along with the observed accuracy are discussed. Experimental results suggest the potential of using conjunctival vasculature as a biometric measure.

      Artificial Neural Networks (ANNs) are increasingly used to model complex, nonlinear functions, and examining input influence additionally provides ANNs with explanatory power. Facial attractiveness is one domain that could benefit from ANN modeling, as the literature shows a complex relationship between facial attributes and attractiveness. A dataset of facial images, each comprising 17 features, a random variable, and a rating, was used to train an ANN to a classification accuracy of 0.86. A contributive analysis algorithm was applied to assess the contributions and interactions of the inputs. In general, the results suggest that more feminized and asymmetrical features enhance female facial attractiveness. The results support the use of ANNs as modeling tools for facial attractiveness and for other domains of similar complexity.

      The purpose of this paper is to automatically classify musical instrument sounds on the basis of a limited number of parameters. This involves issues such as feature extraction and the development of a classifier using the obtained features. For feature extraction, a 5-second audio file stored in WAVE format is passed to a feature extraction function, which calculates more than 20 numerical features, in both the time domain and the frequency domain, that characterize the sample. For the classification task, we designed a two-layer Feed-Forward Neural Network (FFNN) trained with the back-propagation algorithm. The FFNN is trained in a supervised manner: the weights are adjusted based on training samples (input-output pairs) that guide the optimization procedure towards an optimum. After training, the neural network is validated by analyzing its response to unknown data in order to evaluate its generalization capabilities. Then, the sequential forward selection method is adopted to choose the best feature set for achieving high classification accuracy. Our goal is mainly to classify sounds into five different musical instrument families, such as strings, woodwinds, and brass.
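
      The abstract names more than 20 time- and frequency-domain features without listing them; the sketch below computes three commonly used ones from a WAVE file to illustrate the extraction step.

```python
import numpy as np
from scipy.io import wavfile

def extract_features(path):
    """Three common audio features (sketch); the paper's exact
    feature set is not given in the abstract."""
    rate, x = wavfile.read(path)
    x = x.astype(float)
    if x.ndim > 1:                # mix stereo down to mono
        x = x.mean(axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)   # zero-crossing rate
    rms = np.sqrt(np.mean(x ** 2))                   # signal energy
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0 / rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    return np.array([zcr, rms, centroid])
```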

      The traditional approach to data protection is to apply encryption and scrambling. These methods alone are not adequate to provide complete content protection, since they cannot protect the data once it has been decrypted and descrambled. A more recent technique, digital watermarking, has been proposed as a complementary approach to data encryption. Digital watermarking is a process in which security or authentication data is imperceptibly embedded within host data. In this paper, we propose a robust blind image watermarking algorithm for embedding a binary watermark sequence in the wavelet domain. Because it uses a quantization-based modulation technique to embed the watermark, this method does not rely on access to the original host image for watermark estimation. The algorithm demonstrates robustness to common watermark distortions, or attacks, such as lowpass filtering, JPEG compression, and additive Gaussian noise. Performance comparisons are also given with respect to other robust algorithms of a similar class.
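
      A minimal sketch of blind quantization-based embedding in the wavelet domain follows. The parity-of-quantizer-bin rule, the choice of the horizontal detail subband, and the step size `delta` are our assumptions; the paper's exact modulation scheme is not given in the abstract.

```python
import numpy as np
import pywt

def embed(img, bits, delta=16.0):
    """Embed watermark bits by quantizing detail coefficients to an
    even (bit 0) or odd (bit 1) quantizer bin (a QIM-style sketch)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    flat = cH.ravel()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / delta)
        if int(q) % 2 != b:
            q += 1                       # flip parity to encode the bit
        flat[i] = q * delta
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract(img, n, delta=16.0):
    """Recover n bits blindly, i.e. without the original host image."""
    _, (cH, _, _) = pywt.dwt2(img.astype(float), "haar")
    return [int(np.round(c / delta)) % 2 for c in cH.ravel()[:n]]
```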

      The work proposed in this paper utilizes neural networks to distinguish speech patterns. The feature extractor is a standard Linear Predictive Coding (LPC) cepstrum coder, which converts the incoming speech signal, captured through a Matlab interface, into the LPC cepstrum feature space. A neural network maps each variable-length LPC trajectory of an isolated word into a fixed-length LPC trajectory, providing the fixed-length feature vector that is fed into a recognizer. The recognizer uses a feed-forward (FF) network trained with back propagation (BP) to test the signal output for the recognition of the feature vectors of isolated words. The feature vectors were normalized and de-correlated. Momentum is used to find the global minimum of the error surface while avoiding oscillation in local minima. The goal of the work is to consistently identify a randomly chosen speech pattern from the samples of four different speakers 100% of the time.

      The present study explores quantitative analysis of the relation between articulatory parameters and the 2D video signal of facial motion, mainly of the upper part of the face, that occurs simultaneously in different emotions. A comprehensive database has been prepared for the analysis: 25 Hindi-speaking females in the age range of 20 to 25 years were trained to express six emotions in addition to a neutral expression. Formants and facial parameters, such as the displacement of the outer eyebrow, inner eyebrow, and center eyebrow, the eye opening, and the distance between the centers of the two eyebrows, are extracted from short frames of utterances of the Hindi vowel V in the respective audio and video signals. Finally, a number of quantitative correlations have been developed between the articulatory parameters and the facial feature parameters.

      We present a new perfect reconstruction DFT/RDFT filter bank algorithm and its application to image fusion, in comparison with the DWT method. To achieve perfect reconstruction, it is necessary to modify the input signal slightly with an invertible transformation only when the number of data points is divisible by 4. In the image fusion application investigated, a SPOT panchromatic image is fused with a corresponding SPOT multispectral image with three bands. For this purpose, the approximation subband of the multispectral image is merged with the detail subbands of the panchromatic image. The fused image is recovered from the subband images by the new perfect reconstruction algorithm.

      This paper introduces a fully automatic chromosome classification algorithm for Multicolor or Multiplex Fluorescence In-Situ Hybridization (M-FISH) images using a Gaussian mixture model technique. M-FISH is a recently developed cellular imaging method for rapid detection of chromosomal abnormalities, in which each chromosome is labeled with 5 dyes and counterstained with the DAPI fluorescence stain. The problem is modeled as a 24-class, 6-feature, pixel-by-pixel classification task, with features composed of dyed-image pixel brightness. Through experiments on the ADIR M-FISH database, we demonstrate that our proposed Gaussian mixture model based approach yields significantly better results than Bayesian classifiers that assume a single Gaussian probability distribution for M-FISH data.
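
      A compact version of the per-class Gaussian mixture pixel classifier might look as follows; the number of mixture components per class is an assumption, since the abstract states only that a single Gaussian per class is insufficient.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train(pixels_by_class, n_components=3):
    """Fit one Gaussian mixture per chromosome class (24 classes of
    6-feature pixels); `n_components` is an assumption."""
    return [GaussianMixture(n_components, covariance_type="full",
                            random_state=0).fit(p)
            for p in pixels_by_class]

def classify(models, pixels):
    """Assign each pixel to the class whose mixture gives it the
    highest log-likelihood."""
    scores = np.column_stack([m.score_samples(pixels) for m in models])
    return scores.argmax(axis=1)
```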

      In this paper, we present a scheme for steganalysis of JPEG images using polynomial fitting. Based on the Generalized Gaussian Distribution (GGD) model of the quantized DCT coefficients, the errors between the logarithm of the histogram of the DCT coefficients and its polynomial fit are extracted as features to discriminate adulterated JPEG images from untouched ones. Experimental results indicate that our method is very successful in detecting the presence of hidden data in JPEG steganography produced by CryptoBola, the F5 algorithm, and JPHS. The proposed feature set also outperforms two other well-known feature sets: the Histogram Characteristic Function Center Of Mass (HCFCOM), and the high-order moment statistical model in the multi-scale decomposition using a wavelet-like transform.
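
      The feature extraction step can be sketched directly from the description: fit a polynomial to the log-histogram of the quantized DCT coefficients and keep the fitting errors as features. The polynomial degree, bin range, and use of coefficient magnitudes below are assumptions.

```python
import numpy as np

def fitting_features(dct_coeffs, degree=2, bins=range(1, 17)):
    """Residuals of a polynomial fit to the log of the quantized-DCT
    histogram; embedding perturbs the GGD-shaped histogram, so the
    residuals separate stego images from clean ones (sketch)."""
    mags = np.abs(dct_coeffs[dct_coeffs != 0])
    hist = np.array([np.sum(mags == b) for b in bins], float) + 1.0  # avoid log(0)
    log_h = np.log(hist)
    coef = np.polyfit(list(bins), log_h, degree)
    return log_h - np.polyval(coef, list(bins))
```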

      Semantic analysis is the top level of image understanding. Before undertaking it, the structure of the given image must be analyzed. In this paper, we first study the structure of images and then build a mathematical model for describing the information contained in an image. This model is used to depict the image's semantics. We have implemented a prototype based on our model, and experimental results show that the model is useful and performs correctly.
  • Part V: Architectures


      Coordination of large numbers of agents performing complex tasks in complex domains is a rapidly progressing area of research. Because of the high complexity of the problem, approximate and heuristic algorithms are typically used for key coordination tasks. Such algorithms usually require tuning of algorithm parameters to get the best performance in particular circumstances, and manual tuning of parameters is sometimes difficult. In this paper, we introduce a neural network with a new concept of dynamic features, called a dynamic network, to model the way a coordination algorithm will work under particular circumstances. Genetic algorithms are used to train the networks from an abstract simulation, TeamSim. The model is then used to rapidly determine an appropriate configuration of the algorithm for a particular domain: users specify required tradeoffs in algorithm performance and use the neural network to find the best configuration for those tradeoffs. Algorithm reconfiguration can even be performed online to improve the performance of an executing team as the situation changes. We present preliminary results showing that the approach is promising in helping users configure and control a large team executing sophisticated teamwork algorithms.

      A multi-agent based cooperative algorithm is described to detect temporal consistency among events. We describe the cooperative aspects of the agent-based algorithm using a UML activity diagram.

      A cooperative multi-agent model is proposed for simulating distributed negotiation in military and government applications. This model uses a condition-event driven rule-based system as the basis for representing knowledge; in this model, the updating and revision of agents' beliefs corresponds to modifying the knowledge base. These agents are reactive and respond to stimuli from the environment in which they are embedded. The distributed agent-based software architecture will enable us to realize human behavior model environments and computer-generated forces (or computer-generated actor) architectures.

      This paper describes the application of techniques based on dynamic neural networks for fault diagnosis. Two architectures of dynamic neural networks are used; the better network is integrated into a state observer bank, where each network describes one system behavior. Training in closed loop is used. The fault detection and diagnosis method is based on the definition of minimum errors (residues), which are calculated by comparing the plant outputs with each dynamic neural network output from the state observer bank, with and without faults. Finally, this technique is applied to a tank system, and it is demonstrated that the observer bank can be reduced, i.e., it is not necessary to use a neural network for each behavior of the system, with and without faults.

      We study the number of information copies on complex networks using percolation simulations and real data from the World Wide Web (WWW). In recent years, many researchers have reported that communication networks, including the WWW, the Internet, and social networks, follow a particular topology known as a complex network. The percolation method has been used to study the error and attack tolerance of communication networks, but most papers focus on the emergence of a large cluster whose loss can break the network into fragments. In this research, our objective is to show the distribution of the number of information copies on complex networks for various values of information fitness, where the fitness is represented by the occupation probability p in the percolation experiment. To investigate the number of copies of information, we use the numbers of both small and large clusters. Our key result is that, compared with the distribution on a lattice, the number of copies on complex networks has a higher frequency at intermediate copy counts. These results show that even when information has constant fitness, the number of information copies can be distributed over a wide range.
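
      A small percolation experiment of this kind is easy to reproduce; the sketch below uses a Barabasi-Albert graph as a scale-free stand-in for the WWW data, with the graph size and trial counts chosen arbitrarily.

```python
import random
from collections import Counter

import networkx as nx

def copy_distribution(G, p, trials=100):
    """Site percolation reading of the experiment: each node keeps
    the information with probability p (the information fitness),
    and each surviving connected cluster is one group of copies."""
    counts = Counter()
    for _ in range(trials):
        kept = [v for v in G if random.random() < p]
        for comp in nx.connected_components(G.subgraph(kept)):
            counts[len(comp)] += 1
    return counts

G = nx.barabasi_albert_graph(10_000, 2)   # scale-free stand-in for the WWW
print(copy_distribution(G, p=0.3, trials=10).most_common(5))
```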

      Following the success of support vector machines (SVMs) and neural network (NN) models for data classification, this study presents a knowledge-based linear classification model for single-phase fluid flow in an annulus, with the Reynolds number equations used as prior knowledge for classification. This approach is achieved by feeding the experimental laminar and turbulent flow data and the Reynolds number equation through a knowledge-based classification model. Classification of fluid flow patterns can be seen as a machine learning problem in which the inputs are vectors of length 5, with attributes representing the parameters that determine fluid flow in the annulus or pipe. The classification weight includes information inherent to each flow type with respect to its critical Reynolds number, expressed in the form of a polyhedral set. The algorithm constructs a separating hyperplane whose weights represent a scaled level of importance for each of the parameters.

      The discovery of new functions, taking the study of the human body as the starting point, contributes to the evolution of Artificial Immune Systems. The natural immune system keeps a kind of record of the best antibodies so that it can recognize antigens in the organism in the future, allowing a faster and more efficient response. This paper proposes that antibodies be grouped in an organized way, and that through an evolutionary process the antibodies belonging to those groupings can evolve and improve the adaptive immune response to an antigen. The aim of this paper is to investigate a new architecture based on immune systems, focusing on behavior analysis and the potential of the architecture. New functions observed in the biological context were studied to compose this new approach.

      A new exploratory, self-adaptive, derivative-free training algorithm is developed. It evaluates only the error function, which is reduced to a set of sub-problems in a constrained search space, and the search directions follow rectilinear moves. To accelerate the training algorithm, an interpolation search is developed that determines the best learning rates; the constrained interpolation search chooses learning rates such that the search direction is not deceived in locating the minimum trajectory of the error function. The proposed algorithm is practical when the error function is ill-conditioned, implying that the Hessian matrix is unstable, or when derivative evaluation is difficult. The benchmark XOR problem (Nitta, 2003) is used to compare the performance of the proposed algorithm with standard back propagation training methods. The proposed algorithm improves on the standard first-order back propagation training method in function evaluations by a factor of 32, and the improvement in the standard deviation of the same metric has a ratio of 38:1. This implies that the proposed algorithm has fewer intractable instances.

      Optimal solutions to Reinforcement Learning (RL) problems may require a huge amount of resources to compute. In this paper, we propose a methodology that applies support vector clustering to reduce the cardinality of the set of states and then solves the RL problem on the reduced states. This method yields optimal and near-optimal policies in less computational time. The policies obtained converge to the optimal policy, and the more clusters we use, the better the policy. We used a fictional city evacuation scenario to test the efficiency of our method.
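
      The idea can be sketched as clustering followed by value iteration on the aggregated MDP. Note that scikit-learn provides no support vector clustering, so KMeans stands in for it below; the aggregation scheme (uniform averaging of states within a cluster) is also an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduced_value_iteration(features, P, R, n_clusters, gamma=0.95):
    """Value iteration on a clustered state space (sketch).

    `features` describes each original state, `P[a]` is the
    transition matrix for action a, and `R` holds state rewards.
    KMeans stands in for the paper's support vector clustering."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(features)
    M = np.zeros((len(features), n_clusters))
    M[np.arange(len(features)), labels] = 1.0   # state -> cluster indicator
    W = M / M.sum(axis=0)                       # averaging weights per cluster
    P_red = [W.T @ Pa @ M for Pa in P]          # reduced transition matrices
    R_red = W.T @ R                             # reduced rewards
    V = np.zeros(n_clusters)
    for _ in range(1000):                       # standard value iteration
        V = np.max([R_red + gamma * Pa @ V for Pa in P_red], axis=0)
    return labels, V
```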

      Prototype based clustering and classification algorithms constitute very intuitive and powerful machine learning tools for a variety of application areas. They combine simple training algorithms and easy interpretability by means of prototype inspection. However, the classical methods are restricted to data embedded in a real vector space and thus, have only limited applicability to complex data as occurs in bioinformatics or symbolic areas. Recently, extensions of unsupervised prototype based clustering to proximity data, i.e. data characterized in terms of a distance matrix only, have been proposed. Since the distance matrix constitutes a universal interface, this opens the way towards an application of efficient prototype based methods for general data. In this contribution, we transfer this idea to supervised scenarios, proposing a prototype based classification method for general proximity data.

      This paper describes the application of Independent Component Analysis (ICA) and Terahertz technologies to materials identification. Similar to Principal Component Analysis (PCA), ICA is a method for finding underlying factors or components in multivariate statistical data. In materials identification problems, the goal is to separate and identify materials from mixtures. We introduce a novel pseudo-inverse ICA-based filtering algorithm to remove noise from images or signals. Both spectroscopic and chromatographic techniques can be used for materials identification; in this paper, continuous-wave Terahertz technology is used to illuminate mixtures of materials, and ICA is used to help separate and identify materials based on the Terahertz images. In the examples given, explosives under different covers are separated and identified using this method.
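
      FastICA on a toy two-source mixture illustrates the separation step; the sinusoid and square-wave sources below are hypothetical stand-ins for the Terahertz material responses.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
# Two hypothetical "material" signatures, linearly mixed.
sources = np.c_[np.sin(40 * t), np.sign(np.sin(15 * t))]
X = sources @ rng.uniform(0.5, 1.5, (2, 2)).T   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)   # estimated independent components
```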

      In this paper, we consider a design method for simultaneously stabilizing modified repetitive controllers. The simultaneous stabilization problem is to design a single controller C(s) that stabilizes all of the plants Gi(s) (i = 1, ..., r). This problem arises in many control system design settings, for example stabilizing, with one linear controller, a nonlinear system linearized at several operating points; it also applies to reliable control systems that must tolerate sensor failure. Many papers have considered the simultaneous stabilization problem, but none has examined a design method for simultaneously stabilizing modified repetitive controllers. In this paper, we propose a design method for simultaneously stabilizing modified repetitive controllers for multiple-input/multiple-output strictly proper plants.

      In this paper, we examine a design method for robust stabilizing multi-period repetitive learning controllers with specified input-output characteristics for single-input/single-output systems. Satoh, Yamada and Mei considered the parametrization of all robust stabilizing multi-period repetitive learning controllers; however, using their parametrization, the input-output characteristics of the control system cannot be specified easily. This paper aims to overcome this problem and proposes the parametrization of all robust stabilizing multi-period repetitive learning controllers with specified input-output characteristics.

      This paper explores the validity of a method to solve the classical neural network prediction (i.e., classification) meta-problem using a combination of different neural network architectures connected in series to provide a more highly correlated solution to a bit prediction problem.
  • Part VI: Data Mining


      Clustering is a powerful tool for analyzing microarray data in order to discover groups of genes sharing patterns of co-expression across a set of conditions. The results of different clustering algorithms on the same dataset may vary significantly, and different algorithms may find varied, but good quality, gene clusters. A consensus across several clustering results may therefore be able to capture the merits of the different clustering algorithms and produce more reliable clusters. This paper proposes a novel weighted consensus clustering algorithm aimed at providing a good quality clustering of microarray data and evaluates it experimentally.
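
      One plausible reading of a weighted consensus scheme is a quality-weighted co-association matrix that is then clustered; the paper's actual weighting may differ, and the hierarchical-clustering consensus step below is our assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def weighted_consensus(labelings, weights, k):
    """Consensus across several clusterings (sketch): count how often,
    weighted by each algorithm's quality score, two genes land in the
    same cluster, then cluster that co-association matrix."""
    n = len(labelings[0])
    co = np.zeros((n, n))
    for labels, w in zip(labelings, weights):
        labels = np.asarray(labels)
        co += w * (labels[:, None] == labels[None, :])
    co /= sum(weights)
    dist = 1.0 - co                     # frequent co-membership = small distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=k, criterion="maxclust")
```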

      The paper describes an automatic surveillance system. Special emphasis is placed on the anomaly detection method, which is able to detect novel, never previously encountered anomalies in people's behavior. The method employs Support Vector Machines to learn a model of each person's normal behavior from previously collected data. Results are described in the context of a facility surveillance application.
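
      The abstract suggests a one-class formulation (a model of normal behavior only); a minimal sketch with a one-class SVM, on hypothetical per-visit features, follows.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical per-visit features for one person (entry hour,
# visit duration in minutes); training data is normal behavior only.
normal = rng.normal(loc=[9.0, 45.0], scale=[1.0, 10.0], size=(500, 2))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

today = np.array([[9.5, 50.0],     # ordinary visit
                  [3.0, 240.0]])   # unusual: 3 a.m., four hours
print(model.predict(today))        # +1 = normal, -1 = anomaly
```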

      In this paper, different types of machine learning classifiers, such as support vector machines (SVMs), artificial neural networks (ANNs), and linear discriminant analysis (LDA), are applied to tornado detection. All methods are used to predict which storm-scale circulations yield tornadoes, based on the radar-derived Mesocyclone Detection Algorithm (MDA) attributes and a month attribute. The incorporation of near-storm environment (NSE) attributes as inputs to the classifiers is investigated, and a sensitivity analysis of each classifier over different ratios of tornadic to non-tornadic observations in the data sets is performed. The computational results show that SVMs achieve higher performance than ANNs and LDA.

      A simple technique called the DK-Map for network intrusion detection is presented. The DK-Map provides a computationally efficient means of constructing self-organizing maps by dynamically generating neurons as dictated by the inherent order relations in the training set; the network size is determined dynamically, and this computational efficiency is one of the technique's significant advantages. Earlier work by the first author showed that high-order nonlinear classifier models, achieved using neural networks with multivariate Gaussian functions and hierarchical Kohonen maps, yield excellent detection and false positive rates. A major motivation for this work is to measure the effectiveness of the DK-Map compared to those techniques for intrusion detection. Training and testing are conducted on pre-processed network dump data and the benchmark KDD 1999 dataset. With the DK-Map we obtained detection rates between 89% and 96.27%, at false positive rates between 0.28% and 2.32%, for network dump data with 37 and 50 neurons respectively.
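
      The abstract describes neurons being generated dynamically as dictated by the training set; a minimal reading of that idea is sketched below, with the distance threshold and learning rate as assumptions (the published DK-Map algorithm may differ).

```python
import numpy as np

class DynamicMap:
    """Grow-on-demand map (sketch): a neuron is created when an input
    is far from every existing neuron; otherwise the nearest neuron
    is nudged toward the input, as in SOM-style updates."""

    def __init__(self, threshold=1.0, lr=0.1):
        self.neurons, self.threshold, self.lr = [], threshold, lr

    def train(self, X):
        for x in X:
            if not self.neurons:
                self.neurons.append(x.copy())
                continue
            d = [np.linalg.norm(x - n) for n in self.neurons]
            i = int(np.argmin(d))
            if d[i] > self.threshold:
                self.neurons.append(x.copy())   # network size grows dynamically
            else:
                self.neurons[i] += self.lr * (x - self.neurons[i])
```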

      Adaptive Resonance Theory (ART) neural networks are a popular class of neural network classifiers based on the adaptive resonance theory developed by Grossberg. ART neural networks have a number of desirable features: guaranteed convergence to a solution, on-line learning capabilities, identification of novel inputs, the ability to offer an explanation for the answers they produce, and good performance on a number of classification problems in a variety of application areas. Two members of the class of ART classifiers that have been introduced into the literature are Gaussian ARTMAP (GAM) and distributed Gaussian ARTMAP (dGAM). The difference between them is that in its learning phase dGAM allows more than one ART node to learn the input pattern, contrary to GAM, which allows only one ART node to learn the input pattern (a winner-take-all ART network). The inventors of dGAM claimed that dGAM addresses the category proliferation problem observed in many winner-take-all ART networks, such as Gaussian ARTMAP, Fuzzy ARTMAP, and Ellipsoidal ARTMAP, amongst others. The category proliferation problem occurs when an ART network, in the process of learning the required classification task, creates more ART nodes than necessary; it is more acute when the ART networks are faced with noisy and/or significantly overlapping data. However, the claim that dGAM outperforms GAM by creating smaller ART networks has not been substantiated in the literature. In this paper, a thorough experimental comparison of the performance of GAM and dGAM is provided. In the process, a new measure of performance for a neural network is introduced, relying on two factors of goodness: the network's size and its generalization performance (i.e., the performance of the trained ART classifier on unseen data). Obviously, a small ART network with good generalization performance is desired. Previous comparisons of ART-like classifiers relied on a trial-and-error procedure (a time-consuming and occasionally unreliable process) to produce a well-performing ART network; the proposed measure of performance allows one to arrive at a good ART network through an automated and reliable process.

      In an earlier publication, an Ant Colony Optimization - Artificial Neural Networks (ACO-ANN) based algorithm for feature subset selection was presented. The algorithm employed ANNs to evaluate subsets produced by ants. It is hypothesized that the performance of the algorithm depends on the generalization ability of the training algorithm used in the ANNs. This paper tests the performance of the algorithm with different types of neural network training techniques. The results obtained demonstrate that all of the studied training methods are competitive, and that the selection of an appropriate training algorithm depends on the user's requirements and priorities, such as desired accuracy, a subset with the fewest features, or computation time.

      With the appearance of IP telephony, the number of POTS (Plain Old Telephone Service) customers has decreased recently. However, POTS is still a major service, producing revenue of more than 3.5 billion yen for NTT in Japan. Several telecommunication carriers provide various types of service menus and options. We have proposed a framework of scenario simulation (FSS) to analyze service demand under competitive and complex market conditions; FSS includes a function to model customer service-choice behavior. A customer-choice-behavior model of POTS services based on menu choice data is presented from three points of view: service providers, service menus, and service options. We construct decision-making processes and evaluate the effect of improving the accuracy of the modeling, and we show an evaluation example using menu-choice data sets from surveys that we conducted in 2005.
  • Part VII: Adaptive Control


      In a petroleum refining plant with a large number of high-pressure facilities, high-pressure gas leaks resulting from equipment failures can lead to disasters. To prevent such accidents, technologies for early detection of leak sounds, together with appropriate countermeasures, are indispensable. A previous work proposed chaos information criteria based on a trajectory parallel measure to analyze the dynamics of acoustic time series data, and reported the effectiveness of these criteria for high-pressure gas leak detection. This paper reports a confirmation experiment carried out at the platformate distillation unit of the Idemitsu Kosan Chiba refinery. Nitrogen gas was leaked artificially at nine different locations, and we analyzed acoustic time series data observed using eight microphones installed in different places. The results demonstrate that gas leak detection is possible regardless of the microphone position.

      ‘Learning control’ is a term attributed to a broad class of self-tuning processes, where the performance of the controlled system, with respect to a particular task, is self-improved based on the performance of previous similar tasks. However, the critical aspect is the stability of the controlled system during online learning. This also applies to neurocontrol, where neural network learning techniques are used for controller adjustments. Here we present a nonlinear control-theoretic platform that provides stability for a class of learning neurocontrollers by establishing the maximum, stable search space for a given controller parameterization. The theory allows the adjustment of controller parameters and/or structure during the learning process.

      In this research work, classical PID, fuzzy logic, and fuzzy-PID controllers were used to implement real-time position control for a mass-spring-damper system manufactured by Educational Control Products (ECP) and available in the laboratory. The simulation was also carried out in Labview, and the results were compared to the actual implementation of the real-time system. The results proved satisfactory, and the proposed position control was achieved. The experimental setup included both hardware and software: the hardware consisted of the above-mentioned system, a servo amplifier, a MultiQ-3 data acquisition card, and an I/O (terminal) board; the software consisted of Wincon with Matlab/Simulink, where Wincon comprises the Real-Time Workshop, VC++, and the Wincon server and client.

      In this paper, we examine a design method for robust stabilizing controllers for nonlinear systems that have an undesirable equilibrium point which prevents approximate feedback linearization. We adopt a three-step procedure to solve this problem. First, a state transformation matrix is determined so that the nominal nonlinear system, with its equilibrium point shifted, is transformed approximately into the controllable canonical form; the nonlinear system is then divided into the controllable canonical representation section and the remainder. Second, the standard nonlinear feedback linearization method is used to transform the controllable canonical form into a linear system. Third, using the quadratic stabilization method, we design robust stabilizing controllers.

      In this paper we present a study on developing an artificial muscle controller based on a PID controller tuned by a neural network, with NN identification of the plant. The nonlinear identification of the muscle is based on an NNARX (Neural Network Auto-Regressive eXogenous) structure (Noorgard, 2000). The NN used to tune the PID parameters has a special learning algorithm so that the output of the closed loop formed by the motor and the controller follows that of this nonlinear muscle model. Identification of the plant using a neural network was developed in order to reduce the mathematical calculation when the plant transfer function is complicated or not known. Two structures are developed and compared; then a comparison with results obtained in our previous works is presented.

      In this paper, we examine a design method for modified PID (Proportional-Integral-Derivative) controllers for unstable plants. The PID controller structure is the most widely used in industrial applications. When a stabilizing PID controller exists, the parametrization of all stabilizing PID controllers has been considered; however, no method has been published that guarantees the stability of a PID control system for any unstable plant while keeping the admissible sets of the P-, I-, and D-parameters independent of each other. In this paper, we propose a design method for modified PID controllers such that the modified PID controller makes the control system stable for any unstable plant, and the admissible sets of the P-, I-, and D-parameters are independent of each other.

      DC motors are used extensively in many applications that require high starting torque, such as crane hoists and electric traction. In order to control the speed of such a drive system while maintaining the current at a limiting value, fuzzy proportional-plus-integral speed and current controllers are designed. Simulink and dSPACE software packages make the real-time implementation of controllers on physical devices such as motors straightforward, and the performance and optimal operation of the devices are studied and evaluated. Using the dSPACE software and the DS1104 control board, the Simulink model of the DC motor is replaced with the actual motor for the real-time implementation. The ultimate goal was to control the real-time DC motor using the controllers designed in this research work.
  • Part VIII: Smart Engineering Systems


      In this paper, we propose regression analysis using fuzzy clusters obtained as the internal latent structure of the data. That is, we use fuzzy clustering to exploit its power to reveal the latent classification structure of the data efficiently. Several numerical examples show the good performance of the proposed method.

      This paper presents a multi-agent financial market simulation. The market is composed of traders who have different initial trading biases to take a specific action. Traders not only buy or sell an asset, but also cover their position in the following periods. Trading strategies are generated using stock price movements and other technical indicators. An XCS learning classifier system is used as an individual learning mechanism to implement the evolution of trader strategies. The results reveal that initial trader bias affects market price dynamics and evolutionary learning prevents the market from crashing, stabilizing the system. Covering mechanisms clearly illustrate the intermediate and minor trend following behaviors of traders. The results contribute to the understanding of potential deviations from efficient market equilibrium.

      Stock return forecasting systems are mainly applied in the stock investment area, while the main research in the area of option trading has focused more on volatility forecasting. Since options can be used by a speculator to take a position in the market, return forecasting should be useful in option trading as well as in stock investment. This study involves modeling neural networks to forecast the return for the next period of time in order to generate option trading signals, in particular going long a call option and/or long a put option. Both daily and weekly trading periods are investigated to assess profitability. Each model is trained and tested on historical data of the Standard & Poor's Composite 500 (S&P 500) stock index (symbol: SPX) and on stock index options for option trading.

      The focus of this paper is predicting a binary time series using recurrent networks. We evaluate the performance of a linear classifier and of neural networks such as Elman and Time Delay Neural Networks for predicting binary time series. The performance of these networks is evaluated with respect to factors such as the neural network architecture (number of hidden layers and neurons, learning rates, and training functions used) and the lag window size. Both networks performed better than the linear classifier, and the Elman neural network performed best among the neural networks.

      In this paper, we present a novel algorithm for inducing stochastic finite memory machines that match a given example data set. The motivation for this approach stems from the ANNIE 2006 binary time-series prediction competition. Our approach extends the paradigm of finite memory machines into the stochastic domain, proves the applicability of this new extension to the problem of inducing unknown sequences, and presents some preliminary results on the effectiveness of the approach. Our score on the ANNIE 2006 competition data set was 19,606 of 20,000 bits predicted correctly, or 98.0349% accuracy. At the time of publication, this was the highest score in the competition. The best result ever on a similar but different sequence was not significantly different (98.4%) (Ashlock, 2006).
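
      The finite-memory idea itself reduces to conditioning next-bit statistics on the last k bits; the sketch below shows only that core, while the paper's induction algorithm for stochastic finite memory machines is certainly richer. The memory order k is an assumption.

```python
from collections import Counter, defaultdict

def induce_predictor(bits, k=8):
    """Predict the next bit from frequency counts conditioned on the
    last k bits: the core of a (stochastic) finite-memory machine."""
    counts = defaultdict(Counter)
    for i in range(k, len(bits)):
        counts[tuple(bits[i - k:i])][bits[i]] += 1
    def predict(history):
        c = counts.get(tuple(history[-k:]))
        return c.most_common(1)[0][0] if c else 0
    return predict

bits = [0, 1, 1, 0] * 500                 # toy periodic sequence
predict = induce_predictor(bits, k=4)
print(predict(bits[:100]))                # next bit after ...0,1,1,0 is 0
```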

      This paper presents an experimental study of the candlestick method in hybrid financial forecasting models. A committee machine with Generalized Regression Neural Network (GRNN) experts is the primary tool that handles the input data, and the candlestick method is introduced into the model using gating networks; this introduction of the candlestick method improves the overall performance. For the experiment, the daily stock quotes of Exxon Mobil, General Electric, General Motors, Google, Microsoft, and Wells Fargo are used as input data sets, and the output of the model is the forecast of the next day's closing price. For comparison, the performance of a simple GRNN-based forecasting model is also shown. The forecasts are evaluated on the basis of the mean squared error.

      In the present globalized, competitive business environment, rationalization of the taxation system and its intelligent monitoring is a challenging task for taxation authorities, largely because of the lack of sufficiently automated intelligent systems [13]. In this paper, various Artificial Neural Network (ANN) models [9] have been studied and simulated with a system for which taxation data were collected and applied. This process was repeated with data from multiple years, and the ANN models were thus trained with such historical data. After appropriate training of the ANN models, sample input data were injected into the trained model. The usual error back propagation algorithm (BPA) is used to train the network, with a sequential training scheme using shuffled patterns for a stochastic search in the weight space. The outcomes of the trained model were studied and analyzed, and on the basis of that study the authors derived a conclusive opinion in favor of adopting the ANN model in the Integrated Taxation Monitoring System (ITMS).

      This paper presents a method and an apparatus, named the Internal Calibrated Digital to Analog Converter (IC-DAC), which automatically searches for the best configuration (setup of switches) to achieve close-to-ideal voltage output of an extended R/2R ladder network for any digital input. The configurations are stored in a mapping matrix, which can be rewritten by an internal calibration process. A prototype with a hardware end and a software end was developed and shown to perform robustly, with many times higher accuracy than a traditional R/2R DAC.

      Vehicle mass moments of inertia (MMOI) and center of gravity (CG) are important vehicle design parameters that are difficult to obtain without a physical specimen or high-fidelity vehicle model. In this paper, we discuss the results of a software-based method for vehicle MMOI and CG estimation that uses a neural network approach. We show that this scheme generates reasonable results within seconds for a variety of wheeled vehicles without using a physical specimen or virtual vehicle model.

      InAs(GaSb)/AlGaAsSb devices are promising semiconductor nanosystems with signal generation and detection capabilities in the near-infrared (NIR) window of the electromagnetic spectrum. A major obstacle in the synthesis of these devices is accurately achieving and controlling the broad range of composition and nanoscale features in complex, multi-layer structures; the sources of variance can be attributed to artifacts at the nanoscale of the device structure. Based on numerous experiments and neural network modeling techniques, our results reveal relationships between the growth parameters, material properties, and device performance parameters. These results can be used to evaluate the growth process and these complex nanostructures.

      Control workstations are used in education and industry as a test station for control algorithm evaluation. A virtual control workstation was designed in the Simulink environment with the Virtual Reality Toolbox, Real-Time Workshop, and SimMechanics and is based on the physical electrical and mechanical parameters of a Quanser Consulting system. Nonlinear parameters of the mechanical system can be easily modified in the virtual workstation to evaluate different control strategies in regard to complexity, robustness to plant variations, and stability. Experimental results from proportional, PI, and CMAC neural network controllers implemented on the Quanser workstation are compared with simulation results from the virtual station.

      An efficient algorithm to solve large-scale vehicle routing problems is presented. The algorithm is based on a network flow formulation and generates tours very close to the optimal solution, while the formulation avoids an exponential number of subtour elimination constraints. The probability that an optimal solution is found is 0.80, and in the remaining cases the algorithm finds a solution with an average difference of 0.69 from the optimal solution.
