
International Conference on Computer and Automation Engineering, 4th (ICCAE 2012)

  • Author(s)/Editor(s):
  • Published:
    2012
  • DOI:
    10.1115/1.859940

Proceedings of the 2012 International Conference on Computer and Automation Engineering, 4th (ICCAE 2012), January 14–15, 2012, in Mumbai, India. The objective of ICCAE 2012 is to provide a platform for researchers, engineers, and academicians, as well as industrial professionals from all over the world, to present their research results and development activities in computer and automation engineering. The conference provides opportunities for delegates to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration.

  • Copyright:
    All rights reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher. ©  2012  ASME
  • ISBN:
    9780791859940
  • No. of Pages:
    460
  • Order No.:
    859940
Front Matter
  • Session 1 Computer and Automation Engineering


      Embedded systems are small in size and committed to a special purpose with real-time computing. They may operate standalone or be dedicated parts of larger general-purpose applications. Microprocessors are multipurpose, programmable electronic devices used in everything from the smallest embedded systems and handheld devices to the largest mainframe computers. Since embedded systems perform sensitive tasks, the performance of the microprocessor strongly affects the significance of the system. In this paper, we evaluate and compare multiple processors acquired from well-known manufacturers and assess the quality of the devices in terms of power consumption, CPU utilization, and peripheral performance. Experimental results based on modern benchmarks are provided and discussed to give users a better idea of suitable products for their real-time systems.

      In this paper, time-delay neural networks (TDNNs) are applied to detecting damage in bridge structures using vibration signature analysis. A simulation study has been carried out for incomplete measurement data. It has been observed that TDNNs perform better than traditional neural networks in this application, and the arithmetic of the TDNNs is simple.

      There are various computing models that aim at a higher degree of distributed and parallel processing. Computing paradigms like Grid Computing, Cloud Computing, and Distributed Computing are all in pursuit of a higher degree of parallelism and scalability. The aim of each model is to realize higher performance with better efficiency and optimal resource utilization. The focus of the Grid over the years has been to bring about virtualization across platforms. In realizing virtualization over the Grid, scalability is often sacrificed. This paper is an attempt to realize both: it introduces a new concept of incorporating an event-driven style of computing into the Grid within a virtualized environment.

      A novel four-quadrant analog multiplier using floating-gate MOS (FGMOS) transistors operating in the saturation region is implemented. Floating-gate MOSFETs are being utilized in a number of new and existing analog applications. These devices are useful not only for designing memory elements but also for implementing circuit elements. The main advantage of FGMOS is that the drain current is proportional to the square of the weighted sum of the input signals. Using conventional transistors, we obtain an input range of only a few hundred millivolts of the supply voltage, and with square-law devices we obtain up to 50%. So, in order to get a 100% range of the supply voltage, we use FGMOS; this is obtained by the control voltage applied at the gate of the FGMOS. The simulation is done with SPICE tools.

      Enhanced mobility and robustness are key features of 3GPP Long Term Evolution (LTE), so it is important for User Equipments (UEs) to use reliable and robust algorithms for the cell search procedure. In this paper, a reliable and robust cell search scheme based on a statistical resampling technique is presented. Resampling is a powerful statistical signal processing technique used for noise reduction and channel parameter estimation. The scheme uses a Monte Carlo statistical approach and is therefore robust against noisy Rayleigh fading due to multipath propagation, and against Doppler spread due to the high-speed mobility of UEs (about 300 km/h). Along with a description of the non-parametric bootstrap technique, this paper also provides theoretical insight into the conventional approach using coherent detection of Primary Synchronization Signals (PSS) and Secondary Synchronization Signals (SSS). Finally, this paper investigates the performance of the proposed cell search algorithm under Additive White Gaussian Noise (AWGN) and Extended Vehicular-A (EVA-300) channel conditions, showing an improved probability of detection of about 30% at a Signal-to-Noise Ratio (SNR) of -6 dB over conventional cell search algorithms.

      Using a query language to interact with a database has always been a professional and complicated task. This complication limits ordinary users' access to the data in a database to a set of predefined reports in pre-implemented software. To reduce this complexity and ease the manipulation of data in databases for general users, we can give each non-professional user the chance to submit questions and requirements to the computer in natural language (NL) and retrieve data the same way. The system is called the Interactive English Natural Language Interface to Generate SQL Queries (IENLIDB). It is designed using UML and developed with C#.NET and SQL Server 2008.

      This paper describes a novel way to provide security for social network sites. The proposed system is constructed from sophisticated architecture and methods presented in a data mining context. The paper illustrates how to capture and analyze relations between participants. In addition, messages are parsed and the semantics of message topics are discovered. An unsupervised anomaly detection algorithm is applied to a real-life email dataset. Experimental results disambiguate outlier messages and clarify the relations between nodes. The combination of social network analysis and message anomaly detection produces an efficient way to design and implement a secure social network.

      In recent years there has been considerable interest in automatic monitoring of welding and in ways to identify various conditions and faults in the system. This paper proposes a novel method for detecting voids in welding joints using adaptive chirplet analysis. It is shown that chirplet analysis can distinctively identify a fault in a welding joint. The arc-weld sound signal sampled from the Pulsed Gas Metal Arc Welding (P-GMAW) process was chosen for the analysis. The chirplet transform yielded distinctive feature extraction capabilities that provided the exact location of the fault in a workpiece. Thus this paper establishes the chirplet transform as an effective tool for better monitoring of the welding process.

      In this paper, set-point tracking and the regulatory responses over the entire operating range are controlled by a PID controller. The performance and startup behavior of a boost converter have been analyzed for the open-loop response using a PID controller. A neuro-controller is designed using NARMA-L2 control to test error convergence in the plant model. Finally, the performance of the proposed system is evaluated with and without the conventional PID controller.

      The method for creating an identikit remained practically unchanged since its invention. Composing a face by overlaying foils with individual facial parts is well known; however, this principle has one big problem: it is not very effective. With the help of a genetic algorithm, faces are created in generations. The face is taken as a whole, not as a sum of individual parts. The witness chooses the best face in each generation rather than giving a description of it. Faces evolve until the witness is satisfied with the result. This way, the whole process is easier for the witness and more effective.
  • Session 2 Computer and Automation Engineering


      This article demonstrates the use of selected lean metrics for evaluating a designed manufacturing system by simulation. The manufacturing system was designed for the automobile industry, specifically for a car door production line. The goal is to eliminate waste already during the design of the manufacturing system. The well-known simulator Witness was used to create a simulation model of the manufacturing system. The main contribution of this article is the calculation of lean metrics by simulation, which greatly helped to validate the designed solution.

      For the investigation of brain signals, using an appropriate feature extraction method is essential. Over the years, different methods have been offered for processing EEG signals, in the frequency domain, the time domain, and the time-frequency domain. The methods considered in this study are wavelet transforms, CSP, the Fourier transform, eigenvector methods, and EMD. The main purpose of this research is to investigate the advantages and disadvantages of these methods and to provide an extensive comparison between them. This study shows that among these methods, EMD is the best for feature extraction of EEG signals because it is more adaptive to non-stationary signals.

      In this paper, we introduce a new time-evolved spectral analysis, SLEX, for analyzing the EMG signal. We first review four other common approaches to EMG feature extraction and then focus on SLEX. Smooth Localized Complex Exponentials (SLEX) is a kind of time-dependent spectral analysis. Different from the conventional Fourier method, it applies two particular smooth windows to the Fourier basis functions and has the capability to be simultaneously orthogonal and localized. In this study we show the application of the FFT, the wavelet transform, autoregressive modeling, and PSE in EMG feature extraction; each of these techniques has its own pros and cons, which we note. SLEX can overcome the shortcomings of conventional Fourier-based spectral analysis and accurately describe the time-dependent statistical properties of the EMG signal. Our concluding table shows that the best approach for EMG feature extraction is a SLEX-based EMG recognition system, which performs well in classifying eight wrist motions. The classification accuracy with four channels is about 98%, superior to the other methods.

      Temporal sequence processing (TSP) is a research area with applications in diverse fields ranging from weather forecasting to time series prediction, speech recognition, and remote sensing. TSP is a function approximation task whose goal is to estimate future values of a sequence of observations based on current and past values of the sequence. In this paper we present trends in satellite imagery using the Vector Quantized Temporal Associative Memory (VQTAM) algorithm [1]. We present a dynamic model to detect temporal changes in satellite imagery using SOFM; in particular, we model a time series that can be used for forecasting.

      In this paper, we propose object segmentation based on the wavelet transform and color space. Image brightness has a great influence on image segmentation, so it is necessary to adjust the brightness of the image and to select the correct color space and a suitable threshold so that the image can be segmented more completely. The wavelet transform can not only compress the image but also mitigate lighting problems. We use improving variable illumination on wavelet (IVIW) to adjust the brightness; then, after several levels of wavelet decomposition, we obtain the average gray scale of the pixels, normalizing the image brightness and reconstructing the image. Among the several types of color space, YCbCr responds well to light, so we convert RGB to the YCbCr color space and extract the luminance for threshold segmentation. Experimental results show that the method effectively improves the quality of image segmentation.

      Vehicular ad hoc networks (VANETs) have emerged recently as a platform to support intelligent inter-vehicle communication and improve traffic safety and performance. The high mobility and power sources of the vehicles, along with the emergence of roadside wireless infrastructures, affect VANETs; thus the networks change frequently and become quite complicated. The research in this paper analyzes their statistical characteristics and provides a theoretical reference for the development of complex network theory.

      The emergence of the use case analysis method provides an effective way of capturing and analyzing software requirements, but for beginners, how to use it efficiently is always a problem. This paper presents some common mistakes that users often make, along with principles and standards users should comply with. Finally, it proposes a granularity-oriented use case analysis method in detail.

      This paper presents our analysis of the dynamics of different metrics specific to ERP systems. This quantitative technical approach to ERP systems development and maintenance provides an agile approach context. ERP systems usually contain huge numbers of tables and related classes, with a growing complexity that can easily become difficult to manage. The problem is how to identify the things that are used often, those that are not used, and those worth investing time in improving. In order to determine these points precisely from a technical perspective, this paper analyzes the data through their dynamics over time and through several ERP-specific metrics.

      In this paper, a rotor-pole shape optimization method to reduce cogging torque in Interior Permanent-Magnet (IPM) motors is developed using the reduced basis technique. The objective function is defined as minimizing the cogging torque. The experimental design of the Taguchi method is used to build the approximation model and to perform the optimization. The method is demonstrated on the rotor-pole shape optimization of an 8-pole/18-slot IPM motor.

      ESA is a new word segmentation method presented in our earlier paper. Because the whole method has only one parameter to be estimated, it is the least supervised method for the task to date. More importantly, the parameter can be approximately predicted by empirical formulae based on the SIGHAN Bakeoff-2 dataset. In this article, we further evaluate the empirical formulae on the SIGHAN Bakeoff-3 dataset. The evaluation results show that the original formulae provide quite accurate prediction of the parameter. Finally, we correct the formulae according to both datasets for better prediction.
  • Session 3 Computer and Automation Engineering


      In order to estimate the effect of geometric error sources and to compensate for the geometric error, a systematic approach to error modeling is presented to build an integrated geometric error model. The error sources are categorized, and the error model is analyzed and simplified. A compensation method based on the error model is also proposed. Simulation results prove the validity of the analysis and the effectiveness of the compensation method.

      In this paper, a general expression for the optimal skew ratio to simultaneously minimize pulsating torque and radial force in brushless permanent magnet motors over the whole speed range is derived based on the finite element analysis method. It is shown that the optimal skew ratio is proportional to the number of slots per pole per phase and is independent of motor drive, motor type, and winding connection.
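The machine constant the result above turns on, the number of slots per pole per phase, is straightforward to compute; the proportionality constant itself comes from the paper's finite element analysis and is not reproduced here. A minimal sketch (the function name is illustrative):

```python
def slots_per_pole_per_phase(slots: int, poles: int, phases: int) -> float:
    """q = S / (P * m): the quantity the abstract states the optimal
    skew ratio is proportional to. P counts poles, not pole pairs."""
    return slots / (poles * phases)

# A common 36-slot, 4-pole, 3-phase machine gives an integral q of 3.
q = slots_per_pole_per_phase(36, 4, 3)
```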

      Tracking moving objects in video and classifying them as human or non-human is an important problem in computer vision. We present a real-time human detection system that utilizes covariance matrices as object descriptors, and we describe a fast method for computing covariances based on integral images. The idea presented here is more general than image sums or histograms. Covariance matrices do not lie on a Euclidean space; therefore we use a LogitBoost classifier modified for analytic manifolds for classification. The algorithm is tested on the CAVIAR human database, where superior detection rates are observed over previous approaches.
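The integral-image trick the abstract mentions follows a standard construction: keep an integral image for each feature channel and for each pairwise product of channels, and the covariance over any rectangle then needs only a handful of lookups. A hedged NumPy sketch of that construction (names are illustrative, and a real detector would build the integral images once per frame rather than per call):

```python
import numpy as np

def integral(img):
    """Integral image padded with a zero row/column, so that
    ii[i, j] equals the sum of img[:i, :j]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    """Sum over rows [r0, r1) and cols [c0, c1) with four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def region_covariance(F, r0, c0, r1, c1):
    """Covariance matrix of the d feature channels of F (H x W x d)
    over a rectangle, via integral images of features and products."""
    d = F.shape[2]
    n = (r1 - r0) * (c1 - c0)
    P = [integral(F[:, :, k]) for k in range(d)]          # first-order sums
    Q = [[integral(F[:, :, k] * F[:, :, l]) for l in range(d)]
         for k in range(d)]                               # second-order sums
    s = np.array([rect_sum(P[k], r0, c0, r1, c1) for k in range(d)])
    C = np.empty((d, d))
    for k in range(d):
        for l in range(d):
            C[k, l] = (rect_sum(Q[k][l], r0, c0, r1, c1)
                       - s[k] * s[l] / n) / (n - 1)
    return C
```

For a d-dimensional feature vector this costs d + d(d+1)/2 integral images, after which the per-window cost is independent of window size.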

      This paper investigates the variations in pressure distribution in a hydrodynamic journal bearing system for varying loads, speeds, and lubricants. The first part of the paper consists of experimental readings, and the second part consists of training a neural network on the measured pressure distributions. In the experimental section, the pressure distribution developed in the journal bearing system is measured on a test rig. Tests are performed at different loads using different lubricants. The recorded experimental pressure distribution data are employed as training and testing data for an artificial neural network of the feed-forward type. The backpropagation algorithm is used to update the weights of the network during training and to minimize the error. We found that the predictions of the artificial neural network were in close agreement with the practical pressure distribution given by the test rig.

      Segmentation of medical images, particularly magnetic resonance images of the brain, is complex and is considered a huge challenge in image processing. Among the numerous algorithms presented in this context, the fuzzy C-means (FCM) algorithm is widely used in MR image segmentation. Recently, researchers have introduced two new parameters to improve the performance of the FCM algorithm, calculated using a neural network in a complex and time-consuming manner. These two parameters have since been calculated by other researchers using a genetic algorithm (GA) and particle swarm optimization (PSO), which reduced the time but yielded no change in the resulting segmentation quality. In this paper we calculate these two parameters using the artificial bee colony (ABC) algorithm, aiming both to reduce the time and to reach a higher quality than that obtained in previous reports.
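For reference, the FCM baseline that the GA/PSO/ABC variants tune alternates two closed-form updates: memberships from distances to the current centroids, then centroids as membership-weighted means. A minimal NumPy sketch of plain FCM (the two extra ABC-tuned parameters are the paper's contribution and are not reproduced here):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means. X: (n, d) data, c: cluster count,
    m: fuzzifier (> 1). Returns (centers, memberships U of shape (c, n))."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # distances from each center to each point, (c, n); eps avoids 0-div
        d = np.linalg.norm(X[:, None] - centers[None], axis=2).T + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)
    return centers, U
```

For brain MR segmentation, X would be the per-pixel intensity (or feature) vectors, and each pixel is assigned to the cluster with the largest membership.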

      The fuzzy-based air compressor sequencer is a smart sequencer for managing compressors in an energy-efficient, process-optimal manner, with lower maintenance cost and less wear. It matches the compressed air supply with system demand. Individual compressor control along with proper sequencing is essential for efficient system operation and high performance. The smart sequencer intelligently decides the loading and unloading of each air compressor and the frequency of the variable speed drive (VSD). It also addresses the problem of the maximum number of compressor switch-ons and switch-offs per unit time, for uniform wear.

      Nowadays, because of its excellent cross-platform support and ease of use, the web application system based on the B/S (browser/server) structure has been widely used in system architecture design. Because of its cross-platform nature and support for web applications, Java has become the best choice. This article discusses a loosely coupled architecture for web applications based on Java: a three-tier architecture comprising a presentation layer, a business logic layer, and a persistence layer. It also presents a solution integrating Struts2, Spring, and Hibernate, typical open-source frameworks for the respective layers.

      In this paper we propose an algorithm to create a cluster of mobile computers that forms a temporary workgroup for processing and sharing information. This workgroup is called a Cluster of Peers (COP) and is created using a neighborhood-with-relation concept. Performance analysis shows that the COP algorithm for mobile databases gives better performance compared to DBSCAN and FDBSCAN.

      The objective of this paper is to discuss cloud computing with respect to data consistency. The paper elaborates, through a survey, on cloud architecture and the services provided. Further, it compares existing cloud servers on the basis of their consistency models and proposes an efficient client-centric consistency model called writes-follow-reads consistency. A strong consistency protocol called the primary-backup protocol is proposed to support the proposed consistency model and enhance the current architecture of existing cloud computing.

      Scheduling is one of the core steps in efficiently exploiting the full capabilities of a multicore machine and is an NP-complete problem. The number of cores on a chip is proliferating to take advantage of improvements in VLSI technology. Not all applications can effectively utilize all cores on the chip; hence it is advantageous to multiprogram the processor and space-partition the cores among the applications being multiprogrammed. The objectives then could be to obtain the minimal makespan, a desired makespan, or a desired utilization of cores for a given DAG. We propose and experiment with a binary search based method to answer the above user queries. Being based on binary search, the method has O(log n) complexity, which is much better than the direct method.
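The binary-search idea can be sketched independently of the scheduler: provided the makespan estimate for a given DAG is non-increasing in the number of cores allotted, the smallest core count meeting a desired makespan is found in O(log n) probes. A hedged sketch, where the `makespan` callable is a stand-in for the paper's DAG scheduling estimate (not reproduced here):

```python
def min_cores_for_makespan(makespan, target, max_cores):
    """Smallest core count c in [1, max_cores] with makespan(c) <= target,
    assuming makespan is non-increasing in c; None if unattainable.
    Uses O(log max_cores) calls to makespan."""
    if makespan(max_cores) > target:
        return None                      # even all cores cannot meet it
    lo, hi = 1, max_cores
    while lo < hi:
        mid = (lo + hi) // 2
        if makespan(mid) <= target:
            hi = mid                     # feasible: try fewer cores
        else:
            lo = mid + 1                 # infeasible: need more cores
    return lo

# Toy cost model, purely illustrative: 100 units of work divided among
# cores, plus a fixed critical path of length 2.
cores = min_cores_for_makespan(lambda c: 100 / c + 2, target=12, max_cores=32)
```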

      Nowadays, new interaction forms are no longer limited to Graphical User Interfaces (GUIs), making Human Computer Interaction (HCI) more natural. The development of humanoid robots for natural interaction is a challenging research topic. Using a gesture-based humanoid, we can operate a system simply by gestures. Inter-human communication is very complex and offers a variety of interaction possibilities. Although speech is often seen as the primary channel of information, psychologists claim that 60% of information is transferred non-verbally. Besides body pose and mimics, other gestures like pointing or hand waving are commonly used. In this paper, the gesture detection and control system of the humanoid robot CHITTI is presented using a predefined dialog situation. The whole information flow, from gesture detection to the reaction of the robot, is presented in detail.
  • Session 4 Computer and Automation Engineering


      This paper presents an advanced password encryption technology based on the new concept of dynamic passwords for users, derived from their current location, time, or other parameters. The basic password remains intact, and users enter two to three additional information characters in addition to the basic password; these characters are dynamic and provided at run time.

      Electric Discharge Machining (EDM) is one of the most popular non-conventional machining processes. During EDM, the process parameters play a very important role in deciding the material removal rate (MRR) and electrode wear (EW). In the present investigation, mild steel specimens have been machined using EDM. The measured MRR and EW have been correlated with machining parameters such as spark current, pulse-on time, and pulse-off time using response surface methodology (RSM). The equations thus derived have been used to find the effect of the machining parameters on MRR and EW during the EDM process.

      Gait recognition is the process of identifying an individual by the manner in which they walk. In this paper, the gait recognition algorithm is based on histograms of template images of a person. A template image of the walking person is obtained by removing everything from the image except the person, and a histogram is generated for each template image. The histogram of the template image is compared with the database, using the Bhattacharyya coefficient to measure the similarity between histograms, and good recognition results are achieved.
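The similarity measure named above is simple to state: for normalized histograms p and q, the Bhattacharyya coefficient is sum_i sqrt(p_i * q_i), which is 1 for identical distributions and 0 for disjoint ones. A minimal sketch (the matching policy, picking the highest-scoring database entry, is an illustrative assumption):

```python
import numpy as np

def bhattacharyya_coefficient(h1, h2):
    """Similarity of two histograms in [0, 1]: sum_i sqrt(p_i * q_i)
    after normalizing each histogram to sum to 1."""
    p = np.array(h1, dtype=float)
    q = np.array(h2, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return float(np.sqrt(p * q).sum())

def best_match(template_hist, database):
    """Index of the database histogram most similar to the template."""
    scores = [bhattacharyya_coefficient(template_hist, h) for h in database]
    return int(np.argmax(scores))
```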

      Previously, analysis of transient multiexponential data using a combination of the Gardner transform and parametric methods was shown to yield good results. However, one unsolved problem is the nonstationarity of the data resulting from the associated deconvolution. Hitherto, trial-and-error methods have been used to select the usable length of the deconvolved data. In this paper, the Cramer-Rao Lower Bound (CRLB) is used to select the data truncation points for use with the MUSIC (Multiple Signal Classification), minimum-norm, and ARMA (autoregressive moving average) methods. Several simulations are performed, based on which truncation points are recommended for each of the three parametric methods.

      In this paper, a new method is presented that utilizes the chaotic properties of a chaos function and the image histogram. Based on the gray scale of each pixel, the image is partitioned into 16 sections. The histogram of each section is used to determine the pixels nominated for embedding the code data bits. The chaos function is responsible for selecting an appropriate order among the pixels chosen at the previous stage. High embedding capacity, a high image PSNR, and resistance against brute-force attacks are the characteristics that emphasize the high effectiveness of this method.
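The 16-way partition can be sketched as splitting the 0-255 gray range into equal bands and collecting each band's pixel coordinates and histogram for the embedding stage. The abstract does not spell out the sectioning rule beyond "considering each pixel gray scale", so equal-width bands are an assumption here:

```python
import numpy as np

def partition_by_gray(img, sections=16):
    """Split the gray range 0..255 into `sections` equal bands; return,
    per band, the (row, col) coordinates of its pixels and the histogram
    of their gray values (one bin per gray level in the band)."""
    band = 256 // sections
    out = []
    for s in range(sections):
        lo, hi = s * band, (s + 1) * band
        mask = (img >= lo) & (img < hi)
        coords = np.argwhere(mask)
        hist, _ = np.histogram(img[mask], bins=band, range=(lo, hi))
        out.append((coords, hist))
    return out
```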

      A fuzzy-genetic data-mining algorithm for extracting both association rules and membership functions from quantitative transactions is shown in this paper. It uses a combination of large 1-itemsets and membership-function suitability to evaluate the fitness values of chromosomes. The calculation of large 1-itemsets can take a lot of time, especially when the database to be scanned cannot be fed entirely into main memory. In this system, an enhanced approach, called the cluster-based fuzzy-genetic mining algorithm, is used. It divides the chromosomes in a population into clusters by the k-means clustering approach and evaluates each individual according to both cluster information and its own information.

      The basic aim of this paper is to discuss the approaches used to build an autonomous Unmanned Ground Vehicle capable of lane detection and following, avoiding obstacles in its path, and autonomously reaching specified GPS coordinates without obstruction. We present innovative approaches in the design of both the hardware and software systems, which together contribute to an effective unmanned vehicular system. In the hardware design we discuss the different sensors used, a unique split-frame technology for the chassis, the design of the electrical systems, and the safety considerations. The software design consists of the autonomous navigation algorithm, image processing methods, obstacle detection and avoidance, and GPS-based waypoint navigation.

      The use of artificial neural networks to solve problems that normal algorithms cannot is increasing day by day. Neural networks still have some major drawbacks; for example, they need a lot of sample data so that self-learning algorithms can be utilized efficiently. We can reduce complexity and increase the accuracy and precision of the result by using neural network concepts only up to a certain level and then applying a new concept of elimination and survival of the fittest. This is implemented by asking a variety of test questions with boolean answers to obtain a result that is more specific, reliable, and easy to handle.

      Identification of major histocompatibility complex (MHC) class-II restricted peptides is an important goal in human immunological research. These peptides are predominantly recognized by CD4+ T-helper cells, which, when turned on, have profound immune regulatory effects. Thus, classification of such MHC class-II binding peptides is very helpful for epitope-based vaccine design. It is important for the treatment of autoimmune diseases to determine which peptides bind to MHC class-II molecules. The experimental methods for identification of these peptides are both time-consuming and cost-intensive; therefore, computational methods have been found useful in classifying these peptides as binders or non-binders. Classification by learning requires a sufficient amount of training data, and the limited number of known MHC class-II binders and non-binders is not sufficient. Here, we have studied the negative selection algorithm, an artificial immune system approach, to classify a limited number of HLA-DRB1*0401 9-mer binders and 9-mer non-binders. For the evaluation of the algorithm, five-fold cross validation has been used. The areas under the ROC curve were found to be 0.788, 0.762, 0.749, 0.773, and 0.755, indicating good predictive performance, as in most cases the value is nearly 0.8.

      In this paper, we develop a new approach for packing 2D irregular items onto an object with a fixed width and unlimited length. We use a two-stage packing approach based on grid approximation, with an integer-representation genetic algorithm (GA) to find the optimal sequence of items for the bottom-left (BL) strategy. In the experiments section, the effectiveness of the proposed methodology is validated by experiments in the apparel industry, and the results demonstrate that the proposed method outperforms the commonly used bottom-left placement strategy combined with random search (RS).

      The main objective of any cutting stock optimization is to meet the demand (orders) and to find the cutting plan with the lowest trim loss at a given time. When cutting stock optimization is viewed as a continuous activity involving more than one order, the trim loss can be further reduced by returning specific leftovers back to stock and reusing them in subsequent orders. These problems are known as cutting stock problems with usable leftovers (CSPUL) and are found in various industries, inter alia in the field of nanotechnology, where matter on the atomic, molecular, and cellular scale is manipulated. We distinguish two types of leftovers: if their size is below an arbitrarily set length T, they are treated as trim loss and discarded; leftovers above T are returned to stock. Following this definition, a heuristic method is proposed that adjusts each specific cutting plan so as to generate more leftovers longer than T and consequently less trim loss. The method is tested by making cutting plans for two orders. The presented results are close to 50% better when leftovers are intentionally generated.
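The leftover rule above reduces to a one-line classification per piece. A hedged sketch (names are illustrative; the boundary case of a piece exactly T long is assumed to return to stock, which the abstract leaves open):

```python
def split_leftovers(pieces, T):
    """Partition leftover lengths: below T they count as trim loss and
    are discarded; T and above go back to stock for later orders."""
    stock = [p for p in pieces if p >= T]
    trim = [p for p in pieces if p < T]
    return stock, trim

def trim_loss(pieces, T):
    """Total length written off as trim loss under threshold T."""
    return sum(p for p in pieces if p < T)
```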
  • Session 5 Communication and Electronics Information


      Natural Language Processing is an emerging field in the area of text mining. As text is an unstructured source of information, it is usually transformed into a structured format to make it a suitable input for automatic information extraction. Part-of-speech tagging is one of the preprocessing steps; it assigns a part of speech to each given word. In this paper we discuss various supervised and unsupervised models and experimentally compare the results obtained with a supervised CRF model and a supervised Maximum Entropy model. We also discuss the problems that arise with supervised part-of-speech tagging.

      Vertical handoff decision making is one of the important research issues in the integration of wireless networks. This paper presents an optimal handoff decision method that takes into account parameters such as available bandwidth, end-to-end delay, jitter, packet dropping rate, and network cost. The algorithm defines one-dimensional parameterized membership functions for each parameter and computes a network evaluation function using the Simple Additive Weighting technique. As the membership functions involve simple formulae and low complexity, the algorithm is suitable for multi-mode mobile terminals with lightweight computational capability.
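A hedged sketch of the Simple Additive Weighting step (the network names, parameter values, and weights below are invented for illustration; the paper's membership functions are not reproduced): each parameter is normalized per network, benefit parameters (e.g. bandwidth) upward and cost parameters (delay, jitter, drop rate, price) downward, then combined in a weighted sum.

```python
def saw_score(networks, weights, benefit):
    """Simple Additive Weighting over candidate networks.

    networks: {name: [parameter values]}, one column per parameter
    weights:  weight per parameter (should sum to 1)
    benefit:  benefit[i] is True when larger is better, False when smaller is better
    """
    scores = {name: 0.0 for name in networks}
    for i in range(len(weights)):
        col = [networks[name][i] for name in networks]
        hi, lo = max(col), min(col)
        for name in networks:
            v = networks[name][i]
            if hi == lo:
                norm = 1.0                     # all candidates tie on this parameter
            elif benefit[i]:
                norm = (v - lo) / (hi - lo)    # larger is better
            else:
                norm = (hi - v) / (hi - lo)    # smaller is better
            scores[name] += weights[i] * norm
    return scores
```

The handoff target is then simply the network with the highest score, which keeps the per-decision cost low enough for a mobile terminal.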

      The National Institute of Standards and Technology [1] highlights the importance of preserving file timestamps for forensic and intrusion detection purposes. Most operating systems keep track of certain timestamps related to files, the most commonly used being the modification, access, and creation (M-A-C) times, which are not guaranteed to be accurate from a forensic perspective. Moreover, UNIX-based operating systems retain only the last modification, last inode change, and last access times. Because operating systems keep only the most recently updated timestamp information, which along with any inaccuracies does not guarantee a successful recreation of the timeline of events, effective incident response is compromised. This paper proposes a novel approach that augments the core of the pathname lookup operation in the Linux kernel, towards accurate and authentic preservation of the file timestamps of system-wide critical files.
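For context, this is what the commonly available M-A-C information looks like from user space (a minimal sketch, not the paper's kernel-level mechanism): `os.stat` exposes exactly the last-updated values the abstract says are insufficient for timeline reconstruction.

```python
import os
import time

def mac_times(path):
    """Return the modification, access, and change times of a file.

    Note the forensic ambiguity the paper targets: st_ctime is the last
    inode-change time on UNIX but the creation time on Windows, and each
    field holds only the most recent event of its kind.
    """
    st = os.stat(path)
    return {"mtime": st.st_mtime, "atime": st.st_atime, "ctime": st.st_ctime}
```

Any earlier modification or access is lost once the field is overwritten, which is why the paper argues for preserving a history of these values inside the kernel.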

      Innovations in computer science have made it possible to acquire and store enormous amounts of data digitally in databases, currently gigabytes or terabytes in a single database and even more in the future. Today we also want all of this data on mobile devices, accessible in a secure manner. In this paper, we discuss and analyze consumptive behaviour based on an object pool with RMS (Record Management System) capabilities. We discuss and analyze different aspects of RMS mining techniques and their behaviour on mobile devices, and we analyze which methods or rules for implementing services are most suitable for mobile devices. The method described here helps analyze large volumes of consumptive-behaviour data and provides guidance for improving marketing in the fields concerned. We also discuss the need for an object pool on mobile devices to enhance their capability. An object pool model based on RMS is proposed: to address the memory-peak problem in J2ME, a secure object pool model using RMS, built on the object pool design pattern, is designed and implemented.
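The object pool pattern underlying the proposal can be sketched in a few lines (illustrative only, in Python rather than the paper's J2ME/Java setting): released objects are reused instead of reallocated, capping the peak number of live instances, which is the memory-peak concern on constrained devices.

```python
class ObjectPool:
    """Minimal object pool: reuse released objects instead of allocating
    new ones, bounding the peak number of live instances."""

    def __init__(self, factory, max_size):
        self._factory = factory   # callable producing a fresh object
        self._max = max_size      # hard cap on instances ever created
        self._free = []           # released objects awaiting reuse
        self.created = 0          # total allocations performed

    def acquire(self):
        if self._free:
            return self._free.pop()      # reuse before allocating
        if self.created >= self._max:
            raise RuntimeError("pool exhausted")
        self.created += 1
        return self._factory()

    def release(self, obj):
        self._free.append(obj)
```

In the paper's setting the pooled objects would wrap RMS record stores; here `factory` is any constructor, e.g. `ObjectPool(dict, 8)`.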

      The idea that, through a telepathic effect or sympathetic vibration, an event or act can lead to similar events or acts in the future, or that an idea conceived in one mind can then arise in another, is known as morphic resonance. This paper tries to establish a relation between electronics and telecommunication and forensic brain-mapping technology. Since the encoding and decoding system of the brain is described by the formula derived below, this paper aims at resolving the brain into different operational frequency ranges so as to study each person's brain encoding and decoding in detail, and at electronically resolving the brain lobes into different electronic circuitry according to the specified brain-wave characteristics, thereby explaining why morphic resonance occurs in the brain systems of all living beings. The frequency dependence of brain mapping on gravity is the typical behaviour of the brain response. Encoding of day-to-day activities depends on the encoding of the brain, its channelisation, regulation, and response. The brain responds with a point-to-point frequency behaviour, mapped to the gravity at that point, at that place, and at that particular time.

      This paper presents an encryption/decryption technique in which dynamism is exploited. The encryption method is simple yet powerful enough to meet the needs of students and staff in a small institution. The application uses a random number generator to produce dynamic keys. Encryption is performed by rotating each set of bits by the random number generated. The results after rotation, together with the random keys, are mapped to a graph containing a curve (sine, cosine, etc.), and their displacements from the curve are determined. The displacements are then rotated in the opposite direction according to the same random keys; the rotated displacements form the encrypted text.
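The bit-rotation core of the scheme can be sketched as follows (an illustrative sketch only: the seeded generator stands in for the paper's dynamic key stream, and the curve-displacement mapping stage is omitted). Rotating each byte left by a key-derived amount is undone by rotating right by the same amount.

```python
import random

def rotl8(b, k):
    """Rotate an 8-bit value left by k positions."""
    k %= 8
    return ((b << k) | (b >> (8 - k))) & 0xFF

def rotr8(b, k):
    """Rotate an 8-bit value right by k positions (inverse of rotl8)."""
    return rotl8(b, (8 - k) % 8)

def encrypt(data, seed):
    """Rotate each byte by a key drawn from a seeded generator;
    sharing the seed lets the receiver reproduce the key stream."""
    rng = random.Random(seed)
    return bytes(rotl8(b, rng.randrange(8)) for b in data)

def decrypt(data, seed):
    rng = random.Random(seed)
    return bytes(rotr8(b, rng.randrange(8)) for b in data)
```

Note that per-byte rotation alone is weak (only 8 possible keys per byte); the paper layers the curve-displacement step on top, which this sketch does not attempt to reproduce.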

      A real-time data acquisition and logging system allows local and remote acquisition and logging of data from industrial processes. The application runs on a PC with a web server. Interaction with the physical quantity to be measured is done through specific hardware or standard PC interfaces such as the serial port, USB, or LAN. The terminal mainly comprises a web database and an ADC that acquires signals from transducers and generates their binary equivalents on output lines interfaced to a microcontroller; the data are transmitted to a PC, the measured waveforms are displayed in LabVIEW, and the data are also logged to a standard spreadsheet, which is uploaded to an IP address for use in remote monitoring or control applications. Test results show that the system meets the accuracy, real-time, and reliability requirements of automatic maintenance equipment and effectively implements remote monitoring. The system is designed to be low cost and flexible as the variety of data to be acquired grows.

      The concept lattice is a new mathematical tool for data analysis and knowledge processing, and there is increasing interest in applying concept lattices to different information systems. A concept lattice may be used to represent the concept generalization structure generated from the underlying data set. Attribute reduction is very important in concept lattice theory because it makes the discovery of implicit knowledge in data easier and its representation simpler. This paper presents a modified lattice-building algorithm in which the generated concept nodes may contain not only the attributes of their child nodes but also other, generalized attributes. The generalization structure of the attributes is called the attribute lattice. In this approach we build the concept lattice, the attribute concept lattice, and the formal concept lattice, and from these deductions we derive a heuristic result that provides better attribute reduction.
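For readers unfamiliar with the machinery, the two derivation operators of formal concept analysis that concept lattices are built from can be sketched directly (a minimal illustration with a toy context, not the paper's modified algorithm): a formal concept is a pair (objects, attributes) closed under both derivations.

```python
def intent(objs, context):
    """Attributes shared by all objects in objs (the ' operator).
    By convention the intent of the empty object set is every attribute."""
    if not objs:
        return set.union(*context.values()) if context else set()
    return set.intersection(*(context[o] for o in objs))

def extent(attrs, context):
    """Objects possessing all attributes in attrs (the dual ' operator)."""
    return {o for o, a in context.items() if attrs <= a}

def is_concept(objs, attrs, context):
    """A formal concept is a pair fixed by both derivation operators."""
    return intent(objs, context) == attrs and extent(attrs, context) == objs
```

The concept lattice orders all such closed pairs by extent inclusion; attribute reduction then asks which attribute columns of the context can be dropped without changing that lattice.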

      This paper investigates transmit beamforming for MIMO relaying systems. Assuming perfect channel state information (CSI) at the destination, we consider delay on the feedback channel. Through channel mean feedback, the transmitters obtain a prediction of the CSI. Beamforming is then carried out by mapping the signals onto the dominant right singular vectors of the source-relay and relay-destination channels. Simulation results show that the proposed scheme achieves good performance with delayed CSI.

      Spectrum sensing is a crucial technology in Cognitive Radio (CR) systems. The performance of spectrum detection can be improved by exploiting the spatial diversity of multiple antennas or cooperative detection among multiple Secondary Users (SUs). Without prior information about the Primary Users (PUs), this paper analyzes the combining of multi-antenna signals in the frequency domain. Simulation results show that the performance of multi-antenna spectrum detection is improved through maximal-ratio combining of the spectrum correlation function.
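Maximal-ratio combining itself is a small, well-known computation, sketched here in its basic flat-fading form (illustrative only; the paper applies the idea to the spectrum correlation function rather than raw samples): weighting each branch by the conjugate of its gain makes branch SNRs add.

```python
def mrc_combine(h, y):
    """Combine branch observations y_i = h_i * s + n_i into an estimate of s,
    using MRC weights w_i = conj(h_i) and normalizing by the total gain."""
    num = sum(hi.conjugate() * yi for hi, yi in zip(h, y))
    den = sum(abs(hi) ** 2 for hi in h)
    return num / den

def mrc_snr(h, noise_power=1.0):
    """Post-combining SNR: branch SNRs add, SNR_out = sum(|h_i|^2) / N0
    (unit signal power assumed)."""
    return sum(abs(hi) ** 2 for hi in h) / noise_power
```

With two equal-gain branches the output SNR doubles, which is the diversity gain the abstract attributes to multi-antenna detection.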
  • Session 6 Communication and Electronics Information


      This paper studies the key technologies involved in a military scenario editing and plan-view display system, and develops a model-based military scenario editing system with a plan-view display component, which achieves unified management of models, visual scenario editing, and 2D plan-view display of the military situation. The military scenario specification is based on MSDL and C-BML; both are extended to meet actual needs. The plan-view display component is based on a distributed geographic information system, which provides sharing and unified management of geographic data.

      MIMO-OFDM technology has attracted attention in wireless communications, since it offers significant increases in data throughput and link range without additional bandwidth or transmit power. It achieves this through higher spectral efficiency (more bits per second per hertz of bandwidth) and improved link reliability or diversity (reduced fading). In this paper, we first review MIMO and OFDM technologies and then analyze the performance of a MIMO-OFDM system.

      In Sample Indexing Modulation (SIM), a message signal is sampled and each set of seven samples is treated as a frame. Each sample in a frame has a sample number from 1 to 7 (3 bits). A frame is stored in RAM using the sample magnitude as the address and the sample number as the data field. Each frame starts with a flag, and the time difference between the arrival of the flag and the sample at the receiver determines the amplitude of that particular sample. Thus, instead of 8 bits per sample (as in PCM), only 4 bits per sample plus one flag per frame are required, which reduces the bandwidth requirement. The reduced bandwidth requirement can allow service providers to offer users advanced services.

      This paper describes a design methodology for reducing router power consumption with the aid of RTL clock gating. Clock gating logic is added to inactive clocked elements (automatically, using a Cadence tool), reducing their power consumption to zero when the values they store are not changing. The technique offers easily configurable, automatically implemented clock gating, allowing maximal reduction in power requirements with minimal designer and software involvement. In this work, the source code was written in Verilog (a hardware description language), synthesized in Xilinx 9.1i, simulated in ModelSim 6.6, and clock gating was applied using Cadence.

      In this paper, an analysis of priority queuing and scheduling algorithms is presented. The algorithms analyzed include First-In First-Out, Priority Queuing, and Weighted Fair Queuing. The analysis network includes a video server and client, an FTP server and client, a VoIP system, and an HTTP server and client. The traffic pattern and packet sizes of the various applications were evenly distributed and based on the Poisson distribution. The Markov model used the packet arrival rate and service time as variables and calculated the average waiting time and queue length. Traffic from this network was routed across a link between two routers configured to support the applications under each queuing algorithm. The end-to-end delay and packet-drop characteristics of each queuing model were compared for the simulated network.
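The Markov model mentioned above, with Poisson arrivals and a single exponential server, is the classical M/M/1 queue, whose average waiting time and queue length have closed forms (a textbook sketch, not the paper's simulation):

```python
def mm1_metrics(lam, mu):
    """Closed-form M/M/1 metrics: Poisson arrivals at rate lam,
    exponential service at rate mu, single server, lam < mu required."""
    if lam >= mu:
        raise ValueError("unstable queue: require lam < mu")
    rho = lam / mu               # server utilization
    W = 1.0 / (mu - lam)         # mean time in system
    Wq = rho / (mu - lam)        # mean time waiting in queue
    Lq = lam * Wq                # mean queue length, by Little's law
    return {"rho": rho, "W": W, "Wq": Wq, "Lq": Lq}
```

For example, one packet per ms arriving against a 2-packets-per-ms server gives 50% utilization and a mean system time of 1 ms; delay grows without bound as `lam` approaches `mu`, which is what priority and fair-queuing disciplines redistribute among traffic classes.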

      Small-scale fading causes severe fluctuations in received signal strength, so accurate channel estimation and equalization of the received signal is one of the most challenging problems in wireless communication. In this work, the performance of 64-QAM under fading conditions is analyzed and an efficient adaptive equalization technique is implemented using the LMS algorithm. The bit error rate (BER) of 64-QAM is determined after passing the signal through a flat Rayleigh fading channel. Unlike conventional channel estimation techniques such as PSAM, the gradient approach is found to be more effective when the input symbols change constantly. By the law of large numbers, the impulse response of the channel is taken to have nearly unit amplitude, and estimation concentrates on the phase of the channel coefficient. Because the incoming symbols constantly change, the behaviour of the channel varies with the data; in the gradient approach, the channel is estimated from the incoming symbols alone. The simulated results are in good agreement with the theoretically determined results.
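The LMS adaptation at the heart of such an equalizer fits in a few lines (a generic stochastic-gradient sketch, not the paper's 64-QAM implementation): the FIR weights are nudged along `mu * error * conj(input)` each symbol.

```python
def lms_equalize(received, desired, n_taps=4, mu=0.05):
    """LMS adaptive FIR equalizer: minimize |d[n] - w . x[n]|^2 with the
    stochastic-gradient update w += mu * e[n] * conj(x[n]).
    Returns the final weights and the per-symbol equalizer outputs."""
    w = [0j] * n_taps
    out = []
    for n in range(len(received)):
        # Tap-delay-line input vector (zero-padded at the start).
        x = [received[n - k] if n - k >= 0 else 0j for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = desired[n] - y
        w = [wi + mu * e * xi.conjugate() for wi, xi in zip(w, x)]
        out.append(y)
    return w, out
```

On a pure phase-rotation channel (the unit-amplitude assumption made in the abstract), a single-tap LMS converges to the conjugate of the channel coefficient, i.e. it learns exactly the phase.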

      This paper presents a 3.1–5.2 GHz ultra-wideband amplifier with a band-pass filter in front of a shunt-feedback cascode topology, designed in a Cadence RF 0.35 µm CMOS process. Particular attention is paid to the wideband circuit and to device choice at high frequency. An analysis of three important parameters (noise figure, power gain, and bandwidth) is also presented. The measured results fit the design results well. Compared with current studies, this paper distinguishes itself by presenting the material parameter configuration and circuit simulation, which aid the analysis and design of the ultra-wideband amplifier.

      Metallic objects placed in the vicinity of an antenna often degrade its radiation performance. In this paper, the proximity effect of metallic objects lying close to the PCB, affecting the antenna's impedance, directive gain, radiation efficiency, feed matching, and polarization purity, was studied. The study was carried out in two ways: first, a 1 mm thick metallic plate was placed beneath the antenna; subsequently, a cubical metallic object was mounted in the same plane as the antenna. The main parameters varied are the metallic plate size, its distance from the antenna, the metallic object's size, and the corresponding distance. The paper helps in approximately optimizing the distance of metallic objects from the antenna according to the antenna's achievable radiation characteristics.

      A resource allocation algorithm for variable bit rate (VBR) sources using the Dynamic Reservation TDMA (DR-TDMA) medium access control protocol is proposed in this paper. DR-TDMA combines the advantages of distributed access and centralized scheduling to multiplex VBR traffic efficiently over a wireless ATM channel, and it can easily be extended to integrate constant bit rate, VBR, and available bit rate traffic. We present a scheduling algorithm in which the base station maintains a virtual status of the access queue in each mobile to control channel access by that mobile. A unique cell control algorithm provides guaranteed or best-effort service to each VBR source, depending on whether its current cells conform to the traffic contract. Simulation results show that DR-TDMA can achieve high throughput, in the range of 92 to 97%, while maintaining a reasonable quality of service.
  • Session 7 Communication and Electronics Information


      MIMO wireless technology has risen over the past few years as an engineering breakthrough that has produced keen interest among both academic and industry researchers in communication. It provides high-quality and efficient broadband services over wireless and mobile units. By exploiting diversity techniques, the information capacity of a communication system can be significantly increased. Research has recently focused on a set of codes that introduce spatial and temporal correlation into signals transmitted from multiple antennas, to provide diversity and coding gain at the receiver. Such codes are called space-time codes. This paper presents space-time codes for MIMO wireless communication. Space-time codes employ redundancy to minimize the effects of fading, noise, and interference. A computer simulation of the Alamouti space-time block code with two transmit antennas and one receive antenna is shown.
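The Alamouti 2x1 scheme simulated in the paper is compact enough to sketch exactly (standard textbook form; this is not the authors' simulation code): two symbols are spread over two antennas and two symbol periods, and a linear combiner at the single receive antenna separates them perfectly when the channel is known and quasi-static.

```python
def alamouti_encode(s1, s2):
    """Alamouti 2x1 encoding: per symbol period, return the pair
    (antenna-1 symbol, antenna-2 symbol).
    Period 1: (s1, s2); period 2: (-conj(s2), conj(s1))."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining at the single receive antenna.
    r1, r2: received samples in the two periods;
    h1, h2: known channel gains from the two transmit antennas."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g
    return s1_hat, s2_hat
```

In the noiseless case the decoder recovers the symbols exactly, and with noise it achieves full transmit diversity of order two, which is the coding gain the abstract refers to.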

      Floating-point division is generally regarded as a low-frequency, high-latency operation in typical floating-point applications, so relatively little development has taken place in this field. Nowadays, however, the floating-point divider has become indispensable and increasingly important in many modern applications. Most previous implementations required much larger area and longer latencies. In this paper we implement a divider for both single-precision and double-precision floating-point numbers using an FPGA.

      In this paper, we present a scheme for sharing a secret image with at least 256 colors into two meaningful cover images. We use a random number generator and the XOR operation to make the cover images appear innocent and reveal no clue about the secret image blocks. Because of the use of random numbers and a secret key, even if one share is stolen by an unauthorized person, it is hard to decrypt it to reveal the secret image.
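The XOR core of such two-share schemes can be sketched on raw bytes (illustrative only; the paper additionally embeds the shares in meaningful cover images, which this sketch omits): one share is a uniformly random pad, the other is the secret XORed with it, and either share alone carries no information about the secret.

```python
import os

def make_shares(secret):
    """Split a secret byte string into two shares:
    a random pad and (secret XOR pad). XOR-ing both recovers the secret."""
    pad = os.urandom(len(secret))
    share = bytes(a ^ b for a, b in zip(secret, pad))
    return pad, share

def reveal(share1, share2):
    """Recombine two shares by bytewise XOR."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

This is a one-time pad split, so the information-theoretic security of a single share holds exactly; the practical difficulty in the paper's setting is hiding each share inside an innocent-looking cover image.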

      In multimedia-based learning, video is widely used in many course domains, and PowerPoint presentations are now widely used in educational institutes. We address the problem of organizing the visual information of prerecorded lecture videos through user interaction at different steps. Currently it is very difficult to browse a recorded video, since no information about its contents is provided. NPTEL, a joint venture of the IITs and IISc, has as its main objective enhancing the quality of engineering education in the country by developing curriculum-based video and web courses; it is carried out by seven IITs and IISc Bangalore as a collaborative project. But the existing videos cannot be used effectively because of their poor browsing facility. Video segmentation is done in three steps: first, the prerecorded lecture video is taken as input and broken into frames; then the important frames (slides) are selected for indexing using the contents of the frames, and index points are retrieved; finally, the original prerecorded lecture video is played from the particular retrieved index point.

      Identifying initial requirement defects and mitigating them is a key aspect of planning system development. In this paper we identify potential requirement defects through an inspection technique and set defect severity and priority for mitigation. We propose a framework for Reliable Requirement Specification (RRS) consisting of three major components: 1) an input component in the form of initial requirements; 2) a free-wheel processing assembly combining the defect identification technique, requirement defects, severity and priority, and defect mitigation; and 3) an output component in the form of a reliable requirement specification. The aim of this research is to make the initial requirements defect-free and capable of delivering concrete, high-quality, and reliable requirements for the subsequent phases of software development.

      This paper presents an IR-sensor-based fiber communication system for sensors/control, using advanced TDM and a new source coding technique to efficiently utilize the time and amplitude scales for sending coded messages in sensor networks. The application circuit is based on optical transmission and can be operated over a high-speed fiber connection or a properly aligned IR wireless link.

      This paper discusses the use of a metamaterial substrate in patch antennas and its design using IE3D software. The metamaterial substrate has the potential to reduce the circuit size of the antenna by a factor of 2 while maintaining the amplitude of the return loss at the specific resonant frequency, and it improves size, bandwidth, radiation directivity, and return loss. Metamaterials can be constructed from a combination of two materials: Flame Retardant 4 (FR-4) and copper in symmetrical-ring structures, or RT5880 and a perfect electric conductor in omega structures. The S-parameters obtained from Computer Simulation Technology (CST) simulation confirm the negative permittivity; the Nicolson-Ross-Weir (NRW) technique is used as the conversion approach. The patch antenna with the metamaterial substrate is designed in IE3D, and its various characteristics are studied and compared with those of a conventional antenna.
