
Seminar Topics

1. Detection of signals by information theoretic criteria-Oye

Abstract: A new approach is presented to the problem of detecting the number of signals in a multichannel time series, based on the application of the information theoretic criteria for model selection introduced by Akaike (AIC) and by Schwarz and Rissanen (MDL). Unlike the conventional hypothesis-testing approach, the new approach does not require any subjective threshold settings; the number of signals is obtained merely by minimizing the AIC or the MDL criterion. Simulation results that illustrate the performance of the new method for the detection of the number of signals received by a sensor array are presented.
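As a rough illustration of the underlying technique, the following Python sketch evaluates the AIC and MDL criteria over the eigenvalues of the sample covariance matrix, following the standard Wax-Kailath formulation. The array size, snapshot count, and source angles in the toy example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_num_signals(X):
    """Estimate the number of signals in array data X (sensors x snapshots)
    by minimizing the AIC and MDL criteria over candidate model orders."""
    p, N = X.shape                               # p sensors, N snapshots
    R = X @ X.conj().T / N                       # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending

    aic, mdl = [], []
    for k in range(p):                           # candidate number of signals
        noise = lam[k:]                          # smallest p - k eigenvalues
        # (p - k) * log-ratio of geometric to arithmetic mean of the noise eigenvalues
        log_ratio = np.sum(np.log(noise)) - (p - k) * np.log(np.mean(noise))
        free_params = k * (2 * p - k)
        aic.append(-2 * N * log_ratio + 2 * free_params)
        mdl.append(-N * log_ratio + 0.5 * free_params * np.log(N))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# Toy example: two complex sources received by an 8-element array in noise.
rng = np.random.default_rng(0)
p, N = 8, 500
angles = np.deg2rad([10.0, 35.0])
A = np.exp(1j * np.pi * np.outer(np.arange(p), np.sin(angles)))   # ULA steering matrix
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.3 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
print(estimate_num_signals(X))   # typically (2, 2) for this well-separated example
```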

2. Step Construction of Visual Cryptography Schemes-FWx

Abstract: Two common drawbacks of the visual cryptography scheme (VCS) are the large pixel expansion of each share image and the small contrast of the recovered secret image. In this paper, we propose a step construction to construct VCS_OR and VCS_XOR for general access structure by applying (2,2)-VCS recursively, where a participant may receive multiple share images. The proposed step construction generates VCS_OR and VCS_XOR which have optimal pixel expansion and contrast for each qualified set in the general access structure in most cases. Our scheme applies a technique to simplify the access structure, which can reduce the average pixel expansion (APE) in most cases compared with many of the results in the literature. Finally, we give some experimental results and comparisons to show the effectiveness of the proposed scheme.

Index Terms: Secret sharing, step construction, visual cryptography.
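The basic building block that the step construction applies recursively is the (2,2)-VCS. The sketch below shows a traditional OR-based (2,2)-VCS with pixel expansion 2; the specific subpixel patterns and the NumPy representation are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def share_22_vcs(secret):
    """Split a binary secret image (0 = white, 1 = black) into two shares
    using a (2,2)-VCS with horizontal pixel expansion 2 (OR reconstruction)."""
    rng = np.random.default_rng()
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    patterns = np.array([[1, 0], [0, 1]], dtype=np.uint8)   # two subpixel patterns
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]        # random pattern for share 1
            s1[i, 2*j:2*j+2] = p
            # white pixel: identical patterns (one dark subpixel after stacking)
            # black pixel: complementary patterns (both subpixels dark after stacking)
            s2[i, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = (np.random.rand(4, 8) > 0.5).astype(np.uint8)
s1, s2 = share_22_vcs(secret)
recovered = np.maximum(s1, s2)          # stacking the shares = pixel-wise OR
# Black secret pixels map to fully dark subpixel pairs, white ones to half-dark pairs.
assert all(recovered[i, 2*j:2*j+2].sum() == (2 if secret[i, j] else 1)
           for i in range(4) for j in range(8))
```

Each share on its own is a random pattern and reveals nothing about the secret; only stacking both shares recovers it, at the cost of the pixel expansion and contrast loss the paper sets out to minimize.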

3. Suppression of acoustic noise in speech using spectral subtraction-YNI

Abstract: A stand-alone noise suppression algorithm is presented for reducing the spectral effects of acoustically added noise in speech. Effective performance of digital speech processors operating in practical environments may require suppression of noise from the digital waveform. Spectral subtraction offers a computationally efficient, processor-independent approach to effective digital speech analysis. The method, requiring about the same computation as high-speed convolution, suppresses stationary noise from speech by subtracting the spectral noise bias calculated during non-speech activity. Secondary procedures are then applied to attenuate the residual noise left after subtraction. Since the algorithm resynthesizes a speech waveform, it can be used as a preprocessor to narrow-band voice communications systems, speech recognition systems, or speaker authentication systems.
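A minimal sketch of the core spectral-subtraction step is given below, assuming the noise magnitude bias is estimated from a leading non-speech segment. The frame length, overlap, and flooring rule are illustrative choices; the paper's secondary residual-noise procedures are not reproduced here.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, nperseg=512):
    """Suppress stationary noise by subtracting a noise-magnitude bias
    estimated from an initial non-speech segment, keeping the noisy phase."""
    f, t, X = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)

    # Estimate the noise bias from frames inside the leading non-speech segment.
    noise_frames = t <= noise_seconds
    noise_mag = mag[:, noise_frames].mean(axis=1, keepdims=True)

    # Subtract the bias and floor the result to avoid negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, 0.05 * noise_mag)

    _, clean = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return clean[:len(noisy)]

# Toy usage: a 440 Hz tone standing in for speech, preceded by noise-only samples.
fs = 8000
t = np.arange(2 * fs) / fs
speech = np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
noisy = speech + 0.2 * np.random.randn(len(t))
enhanced = spectral_subtraction(noisy, fs)
```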

4. Digital Image Forensics via Intrinsic Fingerprints-VA8

Abstract: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera-captured image is modelled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these post-camera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by other image production processes. Any change or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints, suggest that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for non-intrusive digital image forensics.

Index Terms: Component forensics, image-acquisition forensics, intrinsic fingerprints, nonintrusive image forensics, steganalysis, tampering detection.
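To give a feel for the component-analysis idea, the sketch below estimates a very simple in-camera fingerprint, the linear CFA-interpolation weights of the green channel, by least squares, and compares it across image regions; a large discrepancy hints at localized post-camera processing. This is a simplified stand-in chosen for illustration, not the paper's detailed imaging model or its blind deconvolution of the manipulation filter.

```python
import numpy as np

def cfa_coeffs(green, parity=1):
    """Estimate linear interpolation weights for green pixels at positions with
    (row + col) % 2 == parity, regressing each on its four sensed neighbours."""
    h, w = green.shape
    feats, targets = [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == parity:
                feats.append([green[i-1, j], green[i+1, j], green[i, j-1], green[i, j+1]])
                targets.append(green[i, j])
    coeffs, *_ = np.linalg.lstsq(np.array(feats), np.array(targets), rcond=None)
    return coeffs

def fingerprint_inconsistency(green, block=64):
    """Distance between estimated CFA fingerprints of two image regions."""
    top = cfa_coeffs(green[:block, :block])
    bottom = cfa_coeffs(green[-block:, -block:])
    return np.linalg.norm(top - bottom)

# Toy usage on a synthetic, bilinearly demosaiced green channel.
rng = np.random.default_rng(1)
raw = rng.random((128, 128))
green = raw.copy()
for i in range(1, 127):
    for j in range(1, 127):
        if (i + j) % 2 == 1:    # simulate in-camera bilinear interpolation
            green[i, j] = 0.25 * (raw[i-1, j] + raw[i+1, j] + raw[i, j-1] + raw[i, j+1])
print(fingerprint_inconsistency(green))   # near zero: consistent fingerprints
```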

5. Maximum likelihood localization of multiple sources by alternating projection-n6M

Abstract: We present a novel and efficient algorithm for computing the exact maximum likelihood estimator of the locations of multiple sources in passive sensor arrays. The estimator is equally well applicable to the case of coherent signals appearing, for example, in multipath propagation problems, and to the case of a single snapshot. Simulation results that demonstrate the performance of the algorithm are included.
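The sketch below illustrates the alternating-projection idea for a uniform linear array: the ML criterion tr(P_A(theta) R) is maximized one source angle at a time over a coarse grid, after adding sources sequentially. The ULA geometry, grid resolution, and stopping rule are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def steering(theta, p):
    """Steering vectors of a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(p)[:, None] * np.sin(np.asarray(theta)))

def ml_criterion(angles, R, p):
    A = steering(angles, p)                        # p x q steering matrix
    P = A @ np.linalg.pinv(A)                      # projection onto span(A)
    return np.real(np.trace(P @ R))

def alternating_projection(R, q, p, grid=None, n_sweeps=5):
    """Maximize tr(P_A(theta) R) by optimizing one source angle at a time."""
    if grid is None:
        grid = np.deg2rad(np.arange(-89.0, 90.0, 1.0))
    angles = []
    for _ in range(q):                             # initialization: add sources one by one
        scores = [ml_criterion(angles + [g], R, p) for g in grid]
        angles.append(grid[int(np.argmax(scores))])
    for _ in range(n_sweeps):                      # alternating maximization sweeps
        for i in range(q):
            scores = [ml_criterion(angles[:i] + [g] + angles[i+1:], R, p) for g in grid]
            angles[i] = grid[int(np.argmax(scores))]
    return np.rad2deg(np.array(angles))

# Toy example: sources at -20 and 30 degrees, 8 sensors, 200 snapshots.
rng = np.random.default_rng(0)
p, N = 8, 200
A = steering(np.deg2rad([-20.0, 30.0]), p)
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.1 * (rng.standard_normal((p, N)) + 1j * rng.standard_normal((p, N)))
R = X @ X.conj().T / N
print(alternating_projection(R, q=2, p=p))         # approximately [-20.  30.]
```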

6. A Feasible Solution to the Beam-Angle-Optimization Problem in Radiotherapy Planning With a DNA-Based Genetic Algorithm Yongjie Li and Jie Lei

Abstract: Intensity-modulated radiotherapy (IMRT) is now becoming a powerful clinical technique to improve the therapeutic ratio for cancer treatment. It has been demonstrated that selection of suitable beam angles is quite valuable for most treatment plans, especially for complicated tumor cases and when a limited number of beams is used. However, beam-angle optimization (BAO) remains a challenging inverse problem, mainly due to the huge computation time. This paper introduces a DNA genetic algorithm (DNA-GA) to solve the BAO problem, aiming to improve the optimization efficiency. A feasible mapping was constructed between the universal DNA-GA algorithm and the specified engineering problem of BAO. Specifically, a triplet code was used to represent a beam angle, and the angles of several beams in a plan composed a DNA individual. A bit-mutation strategy was designed to set different segments in DNA individuals with different mutation probabilities, and the dynamic probability of structure mutation operations was designed to further improve the evolutionary process. The results on simulated and clinical cases showed that DNA-GA is feasible and effective for the BAO problem in IMRT planning and, to some extent, obtains the optimized results faster than GA.

Index Terms: Beam-angle optimization (BAO), DNA computation, genetic algorithm (GA), intensity-modulated radiotherapy (IMRT).
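A minimal sketch of the encoding idea follows: each beam angle is represented by a triplet over the DNA alphabet, an individual concatenates the triplets of all beams, and a plain GA loop evolves the population. The fitness function here is a hypothetical placeholder (it only rewards well-spread beams), since the real objective requires a dose-calculation engine; the selection and mutation details are also illustrative, not the paper's exact operators.

```python
import random

BASES = "ACGT"                        # DNA alphabet; each triplet encodes one beam angle

def decode_angle(triplet):
    """Map a base-4 triplet (values 0..63) onto a gantry angle in [0, 360) degrees."""
    value = sum(BASES.index(b) * 4 ** i for i, b in enumerate(triplet))
    return value * 360.0 / 64.0

def decode(individual, n_beams):
    return [decode_angle(individual[3*k:3*k+3]) for k in range(n_beams)]

def fitness(angles):
    """Hypothetical placeholder objective: a real planner would use the
    inverse-planning cost from a dose engine. Here we just reward spread-out beams."""
    angles = sorted(angles)
    gaps = [(angles[(i+1) % len(angles)] - angles[i]) % 360 for i in range(len(angles))]
    return -sum((g - 360.0 / len(angles)) ** 2 for g in gaps)

def mutate(individual, p_mut=0.05):
    return "".join(random.choice(BASES) if random.random() < p_mut else b
                   for b in individual)

def dna_ga(n_beams=5, pop_size=40, generations=200):
    pop = ["".join(random.choice(BASES) for _ in range(3 * n_beams))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(decode(ind, n_beams)), reverse=True)
        parents = pop[:pop_size // 2]                  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(3, 3 * n_beams, 3)  # crossover on triplet boundaries
            children.append(mutate(a[:cut] + b[cut:]))
        pop = parents + children
    best = max(pop, key=lambda ind: fitness(decode(ind, n_beams)))
    return sorted(decode(best, n_beams))

print(dna_ga())   # five roughly equally spaced beam angles
```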

7. Crypto-Biometric Verification Protocol

Abstract: Concerns on widespread use of biometric authentication systems are primarily centered around template security, revocability, and privacy. The use of cryptographic primitives to bolster the authentication process can alleviate some of these concerns, as shown by biometric cryptosystems. In this paper, we propose a provably secure and blind biometric authentication protocol, which addresses the concerns of users' privacy, template protection, and trust issues. The protocol is blind in the sense that it reveals only the identity, and no additional information about the user or the biometric to the authenticating server or vice-versa. As the protocol is based on asymmetric encryption of the biometric data, it captures the advantages of biometric authentication as well as the security of public key cryptography. The authentication protocol can run over public networks and provide non-repudiable identity verification. The encryption also provides template protection, the ability to revoke enrolled templates, and alleviates the concerns on privacy in widespread use of biometrics. The proposed approach makes no restrictive assumptions on the biometric data and is hence applicable to multiple biometrics. Such a protocol has significant advantages over existing biometric cryptosystems, which use a biometric to secure a secret key, which in turn is used for authentication. We analyze the security of the protocol under various attack scenarios. Experimental results on four biometric datasets (face, iris, hand geometry and fingerprint) show that carrying out the authentication in the encrypted domain does not affect the accuracy, while the encryption key acts as an additional layer of security.

Index Terms: Biometrics, Privacy, Security, Cryptosystems, Support Vector Machines, Artificial Neural Networks, Public Key Cryptography.

8. Blood Glucose Prediction Using Stochastic Modeling in Neonatal Intensive Care

Aaron J. Le Compte, Dominic S. Lee, J. Geoffrey Chase, Jessica Lin, Adrienne Lynn, and Geoffrey M. Shaw

Abstract: Hyperglycemia is a common metabolic problem in premature, low-birth-weight infants. Blood glucose homeostasis in this group is often disturbed by immaturity of endogenous regulatory systems and the stress of their condition in intensive care. A dynamic model capturing the fundamental dynamics of the glucose regulatory system provides a measure of insulin sensitivity (SI). Forecasting the most probable future SI can significantly enhance real-time glucose control by providing a clinically validated/proven level of confidence on the outcome of an intervention, and thus increased safety against hypoglycemia. A 2-D kernel model of SI is fitted to 3567 h of identified, time-varying SI from retrospective clinical data of 25 neonatal patients with birth gestational ages of 23 to 28.9 weeks. Conditional probability estimates are used to determine SI probability intervals. A lag-2 stochastic model and adjustments of the variance estimator are used to explore the bias-variance tradeoff in the hour-to-hour variation of SI. The model captured 62.6% and 93.4% of in-sample SI predictions within the (25th-75th) and (5th-95th) probability forecast intervals. This over-conservative result is also present in the cross-validation cohorts and in the lag-2 model. Adjustments to the variance estimator found that a reduction to 10%-50% of the original value provided optimal coverage, with 54.7% and 90.9% in the (25th-75th) and (5th-95th) intervals. A stochastic model of SI provided conservative forecasts, which can add a layer of safety to real-time control. Adjusting the variance estimator provides a more accurate, cohort-specific stochastic model of SI dynamics in the neonate.

Index Terms: Forecasting, human factors, stochastic approximation.
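A minimal sketch of the kernel-based conditional forecasting idea: a 2-D Gaussian kernel density is fitted to consecutive (SI_t, SI_t+1) pairs, and forecast percentiles for SI_t+1 are read off the conditional density at the current SI value. The synthetic SI trace and the default kernel bandwidth are illustrative assumptions; the paper's lag-2 model and variance-estimator adjustments are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def si_forecast_interval(si_series, si_now, percentiles=(5, 25, 75, 95), grid_size=400):
    """Forecast percentile bounds for next-hour insulin sensitivity SI_{t+1}
    given the current value SI_t, using a 2-D kernel density over hourly pairs."""
    pairs = np.vstack([si_series[:-1], si_series[1:]])      # rows: SI_t, SI_{t+1}
    kde = gaussian_kde(pairs)

    # Evaluate the joint density along the slice SI_t = si_now and normalize it
    # to obtain the conditional density of SI_{t+1} given SI_t.
    grid = np.linspace(si_series.min(), si_series.max(), grid_size)
    joint = kde(np.vstack([np.full(grid_size, si_now), grid]))
    cdf = np.cumsum(joint)
    cdf /= cdf[-1]
    return {p: float(np.interp(p / 100.0, cdf, grid)) for p in percentiles}

# Toy usage: a synthetic, slowly drifting hourly SI trace (arbitrary units).
rng = np.random.default_rng(0)
si = np.cumsum(rng.normal(0, 0.02, size=500)) + 1.0
print(si_forecast_interval(si, si_now=si[-1]))
```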

9. Coevolution of Role-Based Cooperation in Multiagent Systems Chern Han Yong and Risto Miikkulainen

Abstract: In tasks such as pursuit and evasion, multiple agents need to coordinate their behavior to achieve a common goal. An interesting question is, how can such behavior be best evolved? A powerful approach is to control the agents with neural networks, coevolve them in separate subpopulations, and test them together in the common task. In this paper, such a method, called Multiagent Enforced SubPopulations (Multiagent ESP), is proposed and demonstrated in a prey-capture task. First, the approach is shown to be more efficient than evolving a single central controller for all agents. Second, cooperation is found to be most efficient through stigmergy, i.e., through role-based responses to the environment, rather than communication between the agents. Together these results suggest that role-based cooperation is an effective strategy in certain multiagent tasks.

Index Terms: Coevolution, communication, cooperation, heterogeneous teams, multiagent systems, neuroevolution, prey-capture task, stigmergy.

10. Computational Developmental Neuroscience: Capturing Developmental Trajectories From Genes to Cognition Jean-Philippe Thivierge

Abstract: Over the course of development, the central nervous system grows into a complex set of structures that ultimately controls our experiences and interactions with the world. To understand brain development, researchers must disentangle the contributions of genes, neural activity, synaptic plasticity, and intrinsic noise in guiding the growth of axons between brain regions. Here, we examine how computer simulations can shed light on neural development, making headway towards systems that self-organize into fully autonomous models of the brain. We argue that these simulations should focus on the open-ended nature of development, rather than a set of deterministic outcomes.

Index Terms: Axonal growth, computational model, development, intrinsic activity, synaptic plasticity.


11. Multiclass Real-Time Intent Recognition of a Powered Lower Limb Prosthesis Huseyin Atakan Varol, Member, IEEE, Frank Sup, Member, IEEE, and Michael Goldfarb, Member, IEEE

Abstract: This paper describes a control architecture and intent recognition approach for the real-time supervisory control of a powered lower limb prosthesis. The approach infers user intent to stand, sit, or walk by recognizing patterns in prosthesis sensor data in real time, without the need for instrumentation of the sound-side leg. Specifically, the intent recognizer utilizes time-based features extracted from frames of prosthesis signals, which are subsequently reduced to a lower dimensionality (for computational efficiency). These data are initially used to train intent models, which classify the patterns as standing, sitting, or walking. The trained models are subsequently used to infer the user's intent in real time. In addition to describing the generalized control approach, this paper describes the implementation of this approach on a single unilateral transfemoral amputee subject and demonstrates via experiments the effectiveness of the approach. In the real-time supervisory control experiments, the intent recognizer identified all 90 activity-mode transitions, switching the underlying middle-level controllers without any perceivable delay by the user. The intent recognizer also identified six activity-mode transitions which were not intended by the user. Due to the intentional overlapping functionality of the middle-level controllers, the incorrect classifications neither caused problems in functionality nor were perceived by the user.

Index Terms: Pattern recognition, physical human-robot interaction, powered prosthesis, rehabilitation robotics.
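A rough sketch of the frame-based recognition pipeline follows: windows of multichannel sensor data are turned into simple time-domain features, reduced in dimensionality, and fed to a classifier for the three activity modes. The synthetic signals, the specific features, and the PCA + LDA pipeline are stand-in assumptions chosen for illustration, not the paper's exact feature set or classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def frame_features(signals, frame_len=100):
    """Cut multichannel prosthesis signals (samples x channels) into frames and
    compute simple time-domain features (mean, std, min, max) per channel."""
    n = (len(signals) // frame_len) * frame_len
    frames = signals[:n].reshape(-1, frame_len, signals.shape[1])
    return np.hstack([frames.mean(1), frames.std(1), frames.min(1), frames.max(1)])

# Synthetic stand-in for labeled training data: 3 channels, 3 activity modes.
rng = np.random.default_rng(0)
modes = {0: 0.0, 1: 1.0, 2: 2.0}                 # 0 = stand, 1 = sit, 2 = walk
X_train, y_train = [], []
for label, offset in modes.items():
    sig = offset + rng.normal(0, 0.3, size=(5000, 3))
    feats = frame_features(sig)
    X_train.append(feats)
    y_train.extend([label] * len(feats))
X_train = np.vstack(X_train)

# Dimensionality reduction followed by a simple classifier for the intent models.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)

# Real-time use: classify each incoming frame of prosthesis data.
new_frame = 1.0 + rng.normal(0, 0.3, size=(100, 3))
print(model.predict(frame_features(new_frame)))   # expected: [1] (sit)
```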



12. Neural Decoding of Finger Movements Using Skellam-Based Maximum-Likelihood Decoding Hyun-Chool Shin, Member, IEEE, Vikram Aggarwal, Student Member, IEEE, Soumyadipta Acharya, Student Member, IEEE, Marc H. Schieber, and Nitish V. Thakor, Fellow, IEEE

Abstract: We present an optimal method for decoding the activity of primary motor cortex (M1) neurons in a nonhuman primate during single finger movements. The method is based on maximum likelihood (ML) inference, which, assuming the probability of finger movements is uniform, is equivalent to maximum a posteriori (MAP) inference. Each neuron's activation is first quantified by the change in firing rate before and after finger movement. We then estimate the probability density function of this activation given finger movement, i.e., Pr(neuronal activation (x) | finger movement (m)). Based on the ML criterion, we choose finger movements to maximize Pr(x|m). Experimentally, data were collected from 115 task-related neurons in M1 as the monkey performed flexion and extension of each finger and the wrist (12 movements). With as few as 20-25 randomly selected neurons, the proposed method decoded single-finger movements with 99% accuracy. Since the training and decoding procedures in the proposed method are simple and computationally efficient, the method can be extended for real-time neuroprosthetic control of a dexterous hand.

Index Terms: Finger movements, maximum likelihood, neural decoding, neural prosthetics, Skellam.
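The Skellam distribution models the difference of two Poisson counts, which makes it a natural model for the change in spike count before versus after movement onset. The sketch below fits per-neuron, per-movement Skellam parameters from mean spike counts and decodes by maximizing the summed log-likelihood; the synthetic spike counts and population sizes are illustrative assumptions, not the paper's recordings.

```python
import numpy as np
from scipy.stats import skellam

def fit_skellam_params(pre_counts, post_counts):
    """Per neuron and movement, the Skellam parameters are simply the mean
    post- and pre-movement spike counts (difference of two Poisson variables)."""
    return post_counts.mean(axis=0), pre_counts.mean(axis=0)   # mu1, mu2 per neuron

def decode(diff, params):
    """ML decoding: pick the movement whose Skellam model gives the observed
    spike-count differences the highest total log-likelihood."""
    scores = [np.sum(skellam.logpmf(diff, mu1, mu2)) for mu1, mu2 in params]
    return int(np.argmax(scores))

# Synthetic stand-in data: 12 movements, 40 neurons, Poisson spike counts.
rng = np.random.default_rng(0)
n_move, n_neur, n_trials = 12, 40, 50
base = rng.uniform(2, 6, size=n_neur)                      # pre-movement rates
gain = rng.uniform(0.5, 2.0, size=(n_move, n_neur))        # movement-specific modulation

params, test_diffs = [], []
for m in range(n_move):
    pre = rng.poisson(base, size=(n_trials, n_neur))
    post = rng.poisson(base * gain[m], size=(n_trials, n_neur))
    params.append(fit_skellam_params(pre, post))
    test_diffs.append(rng.poisson(base * gain[m], n_neur) - rng.poisson(base, n_neur))

decoded = [decode(d, params) for d in test_diffs]
print(decoded)     # ideally [0, 1, ..., 11]
```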

13. Principal Component Analysis as a Tool for Analyzing Beat-to-Beat Changes in ECG Features: Application to ECG-Derived Respiration Philip Langley, Emma J. Bowers, and Alan Murray

Abstract: An algorithm for analyzing changes in ECG morphology based on principal component analysis (PCA) is presented and applied to the derivation of surrogate respiratory signals from single-lead ECGs. The respiratory-induced variability of ECG features (P waves, QRS complexes, and T waves) is described by the PCA. We assessed which ECG features and which principal components yielded the best surrogate for the respiratory signal. Twenty subjects performed controlled breathing for 180 s at 4, 6, 8, 10, 12, and 14 breaths per minute, as well as normal breathing. ECG and breathing signals were recorded. Respiration was derived from the ECG by three algorithms: the PCA-based algorithm and two established algorithms based on RR intervals and QRS amplitudes. ECG-derived respiration was compared to the recorded breathing signal by magnitude squared coherence and cross-correlation. The top-ranking algorithm for both coherence and correlation was the PCA algorithm applied to QRS complexes. Coherence and correlation were significantly larger for this algorithm than for the RR algorithm (p < 0.05 and p < 0.0001, respectively) but were not significantly different from the amplitude algorithm. PCA provides a novel algorithm for analysis of both respiratory and nonrespiratory related beat-to-beat changes in different ECG features.

Index Terms: ECG-derived respiration (EDR), principal component analysis (PCA).
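A minimal sketch of the QRS variant of the idea: windows aligned on each R peak are stacked into a beat matrix, and the projection of each beat onto the first principal component gives a beat-by-beat respiratory surrogate. The peak detector settings, window length, and the synthetic amplitude-modulated pulse train are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np
from scipy.signal import find_peaks

def edr_from_qrs(ecg, fs, half_window=0.05):
    """Derive a beat-by-beat respiratory surrogate from a single-lead ECG:
    align a short window on each detected R peak, then project the beats onto
    the first principal component of the beat matrix."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=np.std(ecg))
    w = int(half_window * fs)
    valid = [r for r in r_peaks if w <= r < len(ecg) - w]
    beats = np.array([ecg[r - w:r + w] for r in valid])

    centered = beats - beats.mean(axis=0)          # remove the average beat shape
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    edr = centered @ vt[0]                         # score of each beat on PC1
    return np.array(valid), edr                    # one surrogate sample per beat

# Toy usage: a synthetic pulse train whose R-peak amplitude is modulated
# by a 0.25 Hz "respiratory" signal.
fs = 250
t = np.arange(60 * fs) / fs
resp = 0.1 * np.sin(2 * np.pi * 0.25 * t)
ecg = np.zeros_like(t)
for beat_time in np.arange(0.5, 60, 1.0):          # one beat per second
    idx = int(beat_time * fs)
    ecg[idx] = 1.0 + resp[idx]
peaks, edr = edr_from_qrs(ecg, fs)
print(edr[:8])                                     # oscillates at the breathing rate
```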

14. Reproducing Interaction Contingency Toward Open-Ended Development of Social Actions: Case Study on Joint Attention Hidenobu Sumioka, Yuichiro Yoshikawa, and Minoru Asada, Fellow, IEEE

Abstract: How can human infants gradually socialize through interaction with their caregivers? This paper presents a learning mechanism that incrementally acquires social actions by finding and reproducing the contingency in interaction with a caregiver. A contingency measure based on transfer entropy is used to select the appropriate pairs of variables to be associated to acquire social actions from the set of all possible pairs. Joint attention behavior is tested to examine the development of social actions caused by responding to changes in caregiver behavior due to reproducing the found contingency. The results of computer simulations of human-robot interaction indicate that a robot acquires a series of actions related to joint attention, such as gaze following and alternation, in an order that almost matches the infant development of joint attention found in developmental psychology. The difference in the order between them is discussed based on the analysis of robot behavior, and then future issues are given.

Index Terms: Contingency chain, joint attention, sequential acquisition of social behavior, transfer entropy.
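The contingency measure at the heart of this approach, transfer entropy, quantifies how much one time series helps predict another beyond the target's own past. The plug-in estimator for discrete variables is short; the binary toy signals below are illustrative, not the variable pairs used in the paper's robot simulations.

```python
import numpy as np
from collections import Counter

def transfer_entropy(y, x):
    """Transfer entropy T(Y -> X) for two discrete time series: how much y at
    time t reduces uncertainty about x at time t+1, beyond x's own past (bits)."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))      # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))            # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))             # (x_{t+1}, x_t)
    singles_x = Counter(x[:-1])                        # x_t
    n = len(x) - 1

    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_xy[(x0, y0)]               # p(x_{t+1} | x_t, y_t)
        p_cond_self = pairs_xx[(x1, x0)] / singles_x[x0]   # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Toy usage: the "caregiver" signal y drives x with a one-step delay,
# so T(y -> x) comes out clearly larger than T(x -> y).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)
noise = rng.choice([0, 1], size=5000, p=[0.9, 0.1])
x = np.roll(y, 1) ^ noise
print(transfer_entropy(y, x), transfer_entropy(x, y))
```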

15. Step Construction of Visual Cryptography Schemes Feng Liu, Chuankun Wu, Senior Member, IEEE, and Xijun Lin

Abstract: Two common drawbacks of the visual cryptography scheme (VCS) are the large pixel expansion of each share image and the small contrast of the recovered secret image. In this paper, we propose a step construction to construct VCS_OR and VCS_XOR for general access structure by applying (2,2)-VCS recursively, where a participant may receive multiple share images. The proposed step construction generates VCS_OR and VCS_XOR which have optimal pixel expansion and contrast for each qualified set in the general access structure in most cases. Our scheme applies a technique to simplify the access structure, which can reduce the average pixel expansion (APE) in most cases compared with many of the results in the literature. Finally, we give some experimental results and comparisons to show the effectiveness of the proposed scheme.

Index Terms: Secret sharing, step construction, visual cryptography.

REFERENCES

1. IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, MARCH 2010
2. IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT, MARCH 2010
3. IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING
4. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, MARCH 2010
5. IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, MARCH 2009
6. IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, MARCH 2010
7. IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, MARCH 2008
