

File: TE-PR-NAKAYAMA-K-364.pdf (756.17 kB, Adobe PDF)
Title: A cascade form predictor of neural and FIR filters and its minimum size estimation based on nonlinearity analysis of time series
Authors: Khalaf, Ashraf A.M.
Nakayama, Kenji
中山, 謙二
Issue Date: March 1998
Journal: IEICE transactions on fundamentals of electronics, communications and computer sciences
ISSN: 0916-8516
Volume: E81-A
Issue: 3
Start Page: 364
End Page: 373
Keywords: Cascade form predictor
FIR filters
Input dimension estimation
Multilayer neural networks
Nonlinear prediction
Nonlinearity analysis
Time series prediction
Abstract: Time series prediction is an important technology in a wide variety of fields. Actual time series contain both linear and nonlinear properties, and the amplitude of the time series to be predicted is usually a continuous value. For these reasons, we combine nonlinear and linear predictors in a cascade form. The nonlinear prediction problem is reduced to a pattern classification problem: a set of past samples x(n - 1), . . . , x(n - N) is transformed into an output, which is the prediction of the next coming sample x(n). We therefore employ a multilayer neural network with a sigmoidal hidden layer and a single linear output neuron for the nonlinear prediction; it is called the Nonlinear Sub-Predictor (NSP). The NSP is trained by a supervised learning algorithm using the sample x(n) as a target. However, it is rather difficult for the NSP to generate continuous amplitudes and to predict the linear property, so we employ a linear predictor after the NSP. An FIR filter is used for this purpose and is called the Linear Sub-Predictor (LSP). The LSP is also trained by a supervised learning algorithm using x(n) as a target. In order to estimate the minimum size of the proposed predictor, we analyze the nonlinearity of the time series of interest. Prediction is equivalent to mapping a set of past samples to the next coming sample, and the multilayer neural network is well suited to this kind of pattern mapping. Still, difficult mappings may exist when several sets of very similar patterns are mapped onto very different samples. The degree of difficulty of the mapping is closely related to the nonlinearity, and the necessary number of past samples used for prediction is determined by this nonlinearity: a difficult mapping requires a large number of past samples. Computer simulations using sunspot data and artificially generated discrete-amplitude data have demonstrated the efficiency of the proposed predictor and the nonlinearity analysis.
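The cascade structure described in the abstract can be illustrated with a minimal NumPy sketch: a small sigmoid-hidden-layer network with a linear output neuron (the NSP) trained by backpropagation on x(n), followed by an FIR filter (the LSP) fitted to the NSP's output sequence. This is an assumption-laden illustration, not the paper's implementation: the synthetic series, network sizes (N = 8 past samples, 6 hidden units), learning rate, and the closed-form least-squares fit of the FIR taps are all choices made here for brevity.

```python
import numpy as np

# Sketch of the cascade predictor: NSP (sigmoid MLP, linear output) -> LSP (FIR).
# All sizes and the training scheme here are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic series mixing linear and nonlinear components (stand-in data).
t = np.arange(600)
x = np.sin(0.2 * t) + 0.3 * np.sin(0.2 * t) ** 2 + 0.6 * np.sin(0.05 * t)

N, H = 8, 6                              # past samples used; hidden units
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Input/target pairs: predict x(n) from x(n-1), ..., x(n-N).
X = np.array([x[n - N:n][::-1] for n in range(N, len(x))])
y = x[N:]

# --- NSP: one sigmoidal hidden layer, single linear output, batch backprop.
W1 = rng.normal(scale=0.3, size=(H, N)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=H);      b2 = 0.0
lr = 0.05
for _ in range(200):
    h = sigmoid(X @ W1.T + b1)           # hidden activations
    p = h @ W2 + b2                      # linear output = NSP prediction
    e = p - y                            # error against target x(n)
    gW2 = h.T @ e / len(y); gb2 = e.mean()
    dh = np.outer(e, W2) * h * (1 - h)   # backprop through sigmoid
    gW1 = dh.T @ X / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

nsp_out = sigmoid(X @ W1.T + b1) @ W2 + b2

# --- LSP: FIR filter over the NSP output, fitted in closed form by least
# squares (a stand-in for the supervised learning used in the paper).
M = 4                                    # FIR filter length (assumed)
A = np.array([nsp_out[n - M + 1:n + 1][::-1] for n in range(M - 1, len(nsp_out))])
c, *_ = np.linalg.lstsq(A, y[M - 1:], rcond=None)
cascade = A @ c                          # cascade (NSP -> LSP) prediction

mse_nsp = np.mean((nsp_out[M - 1:] - y[M - 1:]) ** 2)
mse_cascade = np.mean((cascade - y[M - 1:]) ** 2)
print(mse_nsp, mse_cascade)
```

Because the FIR taps are fitted by least squares over a window that includes the current NSP output, the cascade's training error can never exceed the NSP's alone, which mirrors the abstract's motivation for appending the linear stage.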
URI: http://hdl.handle.net/2297/5646
Type: Journal Article
Rights: Registered with the permission of the Institute of Electronics, Information and Communication Engineers (IEICE)
Version: publisher
Appears in Collections: 1.査読済論文(工)

Please use the following identifier to cite or link to this item: http://hdl.handle.net/2297/5646

All items in this repository are protected by copyright.

 
