

File: TE-PR-NAKAYAMA-K-1494.pdf (359.79 kB, Adobe PDF)
Title: Approximating many valued mappings using a recurrent neural network
Author(s): Tomikawa, Y.
Nakayama, Kenji
中山, 謙二
Issue Date: May 1998
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Journal Title: IEEE&INNS Proc. of IJCNN'98, Anchorage
Volume: 2
Start Page: 1494
End Page: 1497
Abstract: In this paper, a recurrent neural network (RNN) is applied to approximating one-to-N many-valued mappings. The RNN described in this paper has a feedback loop from an output to an input in addition to the conventional multilayer neural network (MLNN). The feedback loop gives the output dynamic properties, and its convergence property can be used for this approximation problem. In order to avoid conflicts among overlapping target data y* for the same input x*, the input data set (x*, y*) and the target data y* are presented to the network in the learning phase. Through this learning, a network function f(x, y) satisfying y* = f(x*, y*) is formed. In the recalling phase, the solutions y of y = f(x, y) are found by the feedback dynamics of the RNN. Different solutions for the same input x can be obtained by changing the initial output value y. It has been shown in our previous paper that the RNN can approximate many-valued continuous mappings by introducing a differential condition into learning. However, if the mapping has discontinuities or changes in the number of values, the network sometimes shows undesirable behavior. In this paper, an integral condition is proposed in order to prevent spurious convergence and to spread the attractive regions around the approximation points.
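The recall phase described in the abstract, finding solutions of y = f(x, y) through the feedback loop, amounts to fixed-point iteration on the network function. A minimal sketch of that recall dynamic follows; since the paper's f is a trained network, a hand-built stand-in function is used here purely for illustration (its fixed points are the two branches ±√x of the hypothetical one-to-2 mapping x → ±√x, which is not an example taken from the paper).

```python
def recall(f, x, y0, n_iter=50):
    """Recall by feedback dynamics: iterate y <- f(x, y), as the RNN's
    output-to-input feedback loop would, until the output settles on a
    fixed point y = f(x, y)."""
    y = y0
    for _ in range(n_iter):
        y = f(x, y)
    return y

# Stand-in for a learned network function f(x, y) (illustrative only).
# Its fixed points satisfy y = 0.5 * (y + x / y), i.e. y**2 = x, so the
# two attractors are the branches +sqrt(x) and -sqrt(x).
def f(x, y):
    return 0.5 * (y + x / y)

# Changing the initial output value selects a different branch,
# mirroring how the paper obtains different solutions for the same x.
print(recall(f, 4.0, y0=1.0))   # converges near +2.0
print(recall(f, 4.0, y0=-1.0))  # converges near -2.0
```

The key point this sketch illustrates is that a single function f can represent a many-valued mapping implicitly: the multiplicity lives in the set of fixed points, and the initial output acts as a branch selector.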
URI: http://hdl.handle.net/2297/6814
Item Type: Conference Paper
Text Version: publisher

Please use this identifier to cite or link to this item: http://hdl.handle.net/2297/6814


