• Title, Summary, Keyword: optimization algorithm (최적화 알고리즘)


Performance Analysis and Comparison of Stream Ciphers for Secure Sensor Networks (안전한 센서 네트워크를 위한 스트림 암호의 성능 비교 분석)

  • Yun, Min;Na, Hyoung-Jun;Lee, Mun-Kyu;Park, Kun-Soo
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.5, pp.3-16, 2008
  • A Wireless Sensor Network (WSN for short) is a wireless network consisting of distributed small devices, called sensor nodes or motes. Recently, there has been extensive research on WSNs and on their security. For secure storage and secure transmission of the sensed information, sensor nodes should be equipped with cryptographic algorithms. Moreover, these algorithms should be implemented efficiently, since sensor nodes are highly resource-constrained devices. There are already some existing algorithms applicable to sensor nodes, including public key ciphers such as TinyECC and standard block ciphers such as AES. Stream ciphers, however, are still to be analyzed, since they were only recently standardized in the eSTREAM project. In this paper, we implement on the MicaZ platform nine of the ten software-based stream ciphers in the second and final phases of the eSTREAM project, and we evaluate their performance. In particular, we apply several optimization techniques to six ciphers, including SOSEMANUK, Salsa20 and Rabbit, which survived the final phase of the eSTREAM project. We also present the implementation results of hardware-oriented stream ciphers and AES-CFB for reference. According to our experiments, the encryption speeds of these software-based stream ciphers are in the range of 31-406 Kbps, so most of these ciphers are fairly acceptable for sensor nodes. In particular, the survivors, SOSEMANUK, Salsa20 and Rabbit, show throughputs of 406 Kbps, 176 Kbps and 121 Kbps using 70 KB, 14 KB and 22 KB of ROM and 2811 B, 799 B and 755 B of RAM, respectively. From the viewpoint of encryption speed, the performance of these ciphers is much better than that of the software-based AES, which shows a speed of 106 Kbps.
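
As a rough sanity check on the reported throughputs, one can compute the time needed to encrypt a single sensor reading. The 30-byte payload size below is a hypothetical example; only the throughput figures come from the abstract.

```python
# Back-of-the-envelope check of the reported throughputs: time to encrypt one
# sensor reading. The 30-byte payload is a hypothetical example, not a figure
# from the paper; the throughputs are the ones reported above.
throughputs_kbps = {
    "SOSEMANUK": 406,
    "Salsa20": 176,
    "Rabbit": 121,
    "AES (software)": 106,
}

PAYLOAD_BITS = 30 * 8  # hypothetical 30-byte sensor reading

for cipher, kbps in throughputs_kbps.items():
    ms = PAYLOAD_BITS / (kbps * 1000) * 1000  # milliseconds per payload
    print(f"{cipher:>15}: {ms:.2f} ms per 30-byte payload")
```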

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems, v.22 no.2, pp.127-142, 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train. This, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, such as vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers, they're propagated backward through time. If the network runs for a long time, this can make the gradient extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
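
The three convolutional ideas named in the abstract (local receptive fields, shared weights, and pooling) can be illustrated with a minimal numpy sketch; the filter values and the toy input below are arbitrary illustrative data, not anything from the paper.

```python
import numpy as np

# Minimal illustration of local receptive fields, shared weights, and pooling.
rng = np.random.default_rng(0)
image = rng.random((8, 8))             # toy grayscale input
kernel = rng.standard_normal((3, 3))   # one shared 3x3 filter (shared weights)
bias = 0.1

def conv2d_valid(x, k, b):
    """Slide the same 3x3 filter over every local receptive field."""
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = x[i:i + 3, j:j + 3]        # local receptive field
            out[i, j] = np.sum(patch * k) + b  # same weights and bias everywhere
    return np.maximum(out, 0.0)                # ReLU nonlinearity

def max_pool2x2(x):
    """Pooling layer: summarize each 2x2 block of the feature map by its maximum."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:h * 2, :w * 2].reshape(h, 2, w, 2).max(axis=(1, 3))

feature_map = conv2d_valid(image, kernel, bias)  # 6x6 feature map
pooled = max_pool2x2(feature_map)                # 3x3 pooled summary
print(feature_map.shape, pooled.shape)
```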

Evaluating efficiency of Coaxial MLC VMAT plan for spine SBRT (Spine SBRT 치료시 Coaxial MLC VMAT plan의 유용성 평가)

  • Son, Sang Jun;Mun, Jun Ki;Kim, Dae Ho;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy, v.26 no.2, pp.313-320, 2014
  • Purpose : The purpose of this study is to evaluate the efficiency of the Coaxial MLC VMAT plan (using $273^{\circ}$ and $350^{\circ}$ collimator angles), in which the leaf motion direction is aligned with the axis of the OAR (organ at risk; in this study, the spinal cord or cauda equina), compared to the Universal MLC VMAT plan (using $30^{\circ}$ and $330^{\circ}$ collimator angles) for spine SBRT. Materials and Methods : Ten spine SBRT cases treated with Coaxial MLC VMAT plans on a Varian TBX were enrolled. The plans were created with Eclipse (Ver. 10.0.42, Varian, USA), PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28) using coplanar $360^{\circ}$ arcs and 10 MV FFF (flattening filter free) beams. The two arcs have collimator angles of $273^{\circ}$ and $350^{\circ}$, respectively. The Universal MLC VMAT plans are based on the existing treatment plans and share their parameters except for the collimator angles. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice. The calculation grid is 0.2 cm and all plans were normalized to the target V100% = 90%. The evaluation indices are the V10Gy, D0.03cc and Dmean of the OAR, the H.I. (homogeneity index) of the target, and the total MU. All Coaxial VMAT plans were verified by gamma test with Mapcheck2 (Sun Nuclear Co., USA), Mapphan (Sun Nuclear Co., USA) and SNC Patient (Sun Nuclear Co., USA, Ver. 6.1.2.18513). Results : The differences between the coaxial and universal VMAT plans are as follows. The coaxial VMAT plan is better in the V10Gy of the OAR (difference up to 4.1%, at least 0.4%, average 1.9%) and in the D0.03cc of the OAR (up to 83.6 cGy, at least 2.2 cGy, average 33.3 cGy). In Dmean, the difference was up to 34.8 cGy, at least -13.0 cGy, and 9.6 cGy on average, indicating that the coaxial VMAT plans are better except in a few cases. The H.I. difference was up to 0.04, at least 0.01, and 0.02 on average, and the coaxial MLC VMAT plan required 74.1 MU less on average. All IMRT verification gamma test results for the coaxial MLC VMAT plan passed with over 90.0% at 1 mm/2%. Conclusion : The Coaxial MLC VMAT treatment plan appeared to be more favorable in most cases than the Universal MLC VMAT treatment plan, and is especially effective in lowering the OAR V10Gy. As a result, the Coaxial MLC VMAT plan can be better than the Universal MLC VMAT plan for the same MU.
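
The gamma test used above for plan verification (pass criterion 1 mm/2%) can be sketched in one dimension under the standard dose-difference/distance-to-agreement formulation; the dose profiles, grid spacing, and dose values below are synthetic illustration data, not the paper's measurements.

```python
import numpy as np

# Hedged 1D sketch of the gamma criterion: a measured point passes if the
# combined dose-difference / distance-to-agreement deviation is <= 1 for some
# point of the calculated distribution.
def gamma_pass_rate(measured, calculated, spacing_mm, dd_percent=2.0, dta_mm=1.0):
    positions = np.arange(len(calculated)) * spacing_mm
    dd = dd_percent / 100.0 * calculated.max()   # global dose-difference criterion
    passed = []
    for i, m in enumerate(measured):
        pos_m = i * spacing_mm
        gamma_sq = ((calculated - m) / dd) ** 2 + ((positions - pos_m) / dta_mm) ** 2
        passed.append(np.sqrt(gamma_sq.min()) <= 1.0)
    return np.mean(passed) * 100.0

calc = np.exp(-0.5 * ((np.arange(100) - 50) / 15.0) ** 2) * 200.0  # synthetic profile (cGy)
meas = calc * 1.01                                                  # 1% systematic offset
print(f"gamma pass rate: {gamma_pass_rate(meas, calc, spacing_mm=1.0):.1f}%")
```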

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.77-97, 2010
  • Market timing is an investment strategy used for obtaining excess return from financial markets. In general, detecting market timing means determining when to buy and sell to get excess return from trading. In many market timing systems, trading rules have been used as an engine to generate trading signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using the control function, it does not generate a trading signal when the pattern of the market is uncertain. Numeric data for rough set analysis should be discretized because rough set analysis only accepts categorical data. Discretization searches for proper "cuts" for numeric data that determine intervals; all values that lie within each interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals and examines the histogram of each variable, then determines cuts so that approximately the same number of samples fall into each of the intervals. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling implements an algorithm based on recursively partitioning the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization searches for categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance using rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, which is the first derivative instrument in the Korean stock market. The KOSPI 200 is a market value weighted index which consists of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis and decision trees; this study experimented with C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
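
The equal frequency scaling method described in the abstract can be sketched as follows; the indicator values and the number of intervals are synthetic placeholders, not the KOSPI 200 data used in the study.

```python
import numpy as np

# Equal frequency scaling: fix the number of intervals and choose cuts so that
# roughly the same number of samples falls into each interval.
def equal_frequency_cuts(values, n_intervals=4):
    quantiles = np.linspace(0, 100, n_intervals + 1)[1:-1]  # inner quantile levels
    return np.percentile(values, quantiles)

def discretize(values, cuts):
    # np.digitize maps each value to the index of the interval it falls into
    return np.digitize(values, cuts)

rsi = np.random.default_rng(1).uniform(10, 90, size=660)  # synthetic technical indicator
cuts = equal_frequency_cuts(rsi, n_intervals=4)
labels = discretize(rsi, cuts)
print("cuts:", np.round(cuts, 2))
print("samples per interval:", np.bincount(labels))  # roughly equal counts
```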

Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki;Son, Sang Jun;Kim, Dae Ho;Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy, v.27 no.2, pp.145-156, 2015
  • Purpose : The purpose of this study is to evaluate the efficiency of Split VMAT planning (contouring the rectum divided into an upper and a lower part to reduce rectal dose) compared to Conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving pelvic lymph nodes. Materials and Methods : A total of 9 cases were enrolled. Each case received radiotherapy with a Split VMAT plan to the prostate involving pelvic lymph nodes. Treatment was delivered using a TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV, and the upper rectum as the remaining part of the whole rectum excluding the lower rectum. Split VMAT plan parameters consisted of 10 MV coplanar $360^{\circ}$ arcs. The two arcs had collimator angles of $30^{\circ}$ and $30^{\circ}$, respectively. An SIB (simultaneous integrated boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63-70 Gy to the prostate in 28 fractions. The $D_{mean}$ of the whole rectum from the Split VMAT plan was applied as the DVC (dose volume constraint) for the whole rectum in the Conventional VMAT plan. In addition, all parameters were set to be the same as in the existing treatment plans. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to the prostate $PTV_{100\%}$ = 90% or 95%. The $D_{mean}$ of the whole rectum, upper rectum, lower rectum and bladder, the $V_{50\%}$ of the upper rectum, the total MU, and the H.I. (homogeneity index) and C.I. (conformity index) of the PTV were compared for technique evaluation. All Split VMAT plans were verified by gamma test with portal dosimetry using EPID. Results : DVH analysis demonstrated differences between the Conventional and the Split VMAT plans. The Split VMAT plan was better in the $D_{mean}$ of the whole rectum (difference up to 134.4 cGy, at least 43.5 cGy, average 75.6 cGy), in the $D_{mean}$ of the upper rectum (up to 1113.5 cGy, at least 87.2 cGy, average 550.5 cGy), in the $D_{mean}$ of the lower rectum (up to 100.5 cGy, at least -34.6 cGy, average 34.3 cGy), in the $D_{mean}$ of the bladder (up to 271 cGy, at least -55.5 cGy, average 117.8 cGy), and in the $V_{50\%}$ of the upper rectum (up to 63.4%, at least 3.2%, average 23.2%). There was no significant difference in the H.I. and C.I. of the PTV between the two plans. The Split VMAT plan required 77 MU more on average. All IMRT verification gamma test results for the Split VMAT plan passed with over 90.0% at 2 mm/2%. Conclusion : As a result, the Split VMAT plan appeared to be more favorable in most cases than the Conventional VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes. By using the split VMAT planning technique it was possible to reduce the upper rectum dose, and thus the whole rectal dose, compared to conventional VMAT planning. Using the split VMAT planning technique also increases treatment efficiency.
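
The DVH-style metrics compared above ($D_{mean}$ and $V_{50\%}$) can be sketched as follows; the voxel doses are synthetic illustration values, and only the 50.4 Gy prescription figure is taken from the abstract.

```python
import numpy as np

# Hedged sketch of simple DVH metrics for one structure.
def dvh_metrics(structure_dose_cgy, prescription_cgy):
    d_mean = structure_dose_cgy.mean()
    # V50%: fraction of the structure volume receiving at least 50% of the prescription
    v50 = np.mean(structure_dose_cgy >= 0.5 * prescription_cgy) * 100.0
    return d_mean, v50

rng = np.random.default_rng(2)
upper_rectum_dose = rng.uniform(500, 5500, size=10_000)  # synthetic voxel doses (cGy)
d_mean, v50 = dvh_metrics(upper_rectum_dose, prescription_cgy=5040)
print(f"Dmean = {d_mean:.1f} cGy, V50% = {v50:.1f}%")
```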


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.4, pp.127-148, 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation technology for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities are irregular due to interdependence, and it is difficult to identify their cause. Previous studies on failure prediction in data centers predicted failures by looking at a single server as a single state, without considering that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within the server. Server-external failures include power, cooling, user errors, and so on. Since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring in the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, this is because server failures do not occur in isolation: a failure may cause failures in other servers or be triggered by them. In other words, while existing studies analyzed failures under the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. There are four major failures considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures that occur for each device are sorted in chronological order, and when a failure occurs in a specific piece of equipment, any failure that occurs in another piece of equipment within 5 minutes of that time is defined as occurring simultaneously. After configuring sequences for the devices that failed at the same time, five devices that frequently fail simultaneously within the configured sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the level of multiple failures differs for each server. This algorithm increases prediction accuracy by giving more weight to a server as its impact on the failure increases. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the results were compared and analyzed. The second experiment improved the prediction accuracy in the complex-server case by optimizing each server's threshold. In the first experiment, which assumed a single server and multiple servers respectively, the single-server assumption predicted that three of the five servers had no failure even though failures actually occurred, whereas the multiple-server assumption predicted that all five servers had failed. As a result of the experiment, the hypothesis that there is an effect between servers was confirmed. This study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that the effect of each server is different, played a role in improving the analysis. In addition, by applying a different threshold for each server, the prediction accuracy could be improved. This study showed that failures whose cause is difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring in servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
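
The 5-minute simultaneity rule described in the abstract can be sketched as follows; the event records and device names are hypothetical examples, not the data center's failure history.

```python
from datetime import datetime, timedelta

# Sketch: failures on different devices within 5 minutes of each other are
# chained into one complex-failure sequence.
WINDOW = timedelta(minutes=5)

events = [  # (timestamp, device, failure type) -- hypothetical records
    (datetime(2020, 3, 1, 10, 0, 10), "server-01", "Server Down"),
    (datetime(2020, 3, 1, 10, 2, 45), "db-01", "Database Management System Service Down"),
    (datetime(2020, 3, 1, 10, 4, 0), "net-03", "Network Node Down"),
    (datetime(2020, 3, 1, 11, 30, 0), "server-07", "Server Down"),
]

def build_sequences(events, window=WINDOW):
    """Sort events chronologically and chain those within `window` of the previous one."""
    events = sorted(events, key=lambda e: e[0])
    sequences, current = [], [events[0]]
    for prev, cur in zip(events, events[1:]):
        if cur[0] - prev[0] <= window:
            current.append(cur)          # simultaneous by the 5-minute rule
        else:
            sequences.append(current)    # gap too large: start a new sequence
            current = [cur]
    sequences.append(current)
    return sequences

for seq in build_sequences(events):
    print([dev for _, dev, _ in seq])
```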

Implementation of Man-made Tongue Immobilization Devices in Treating Head and Neck Cancer Patients (두 경부 암 환자의 방사선치료 시 자체 제작한 고정 기구 유용성의 고찰)

  • Baek, Jong-Geal;Kim, Joo-Ho;Lee, Sang-Kyu;Lee, Won-Joo;Yoon, Jong-Won;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy, v.20 no.1, pp.1-9, 2008
  • Purpose: For head and neck cancer patients treated with radiation therapy, proper immobilization of intra-oral structures is crucial in reproducing treatment positions and optimizing dose distribution. We produced a man-made tongue immobilization device for each patient in this study. Reproducibility of treatment positions and dose distributions at the air-and-tissue interface were compared using man-made tongue immobilization devices and conventional tongue bites. Materials and Methods: Dental alginate and putty were used to produce the man-made tongue immobilization devices. In order to evaluate reproducibility of treatment positions, all patients were CT-simulated, and linac-grams were repeated 5 times with each patient in the treatment position. An acrylic phantom was devised in order to evaluate the safety of the man-made tongue immobilization devices. Air, water, alginate and putty were placed in the phantom, and dose distributions at the air-and-tissue interface were calculated using Pinnacle (version 7.6c, Phillips, USA) and measured with EBT film. Two different field sizes (3$\times$3 cm and 5$\times$5 cm) were used for comparison. Results: Evaluation of linac-grams showed that reproducibility of the treatment position was 4 times more accurate with man-made tongue immobilization devices than with conventional tongue bites. Patients felt more comfortable using customized tongue immobilization devices during radiation treatment. Air-and-tissue interface dose distributions calculated using Pinnacle were 7.78% and 0.56% for the 3$\times$3 cm and 5$\times$5 cm fields, respectively. Dose distributions measured with EBT (International Specialty Products, USA) film were 36.5% and 11.8% for the 3$\times$3 cm and 5$\times$5 cm fields, respectively; the values from EBT film were higher. Conclusion: Using man-made tongue immobilization devices made of dental alginate and putty in the treatment of head and neck cancer patients showed higher reproducibility of the treatment position compared with using conventional mouthpieces. Man-made immobilization devices can help optimize air-and-tissue interface dose distributions and compensate for the limited accuracy of radiotherapy planning systems in calculating air-tissue interface dose distributions.


A Development of Traffic Queue Length Measuring Algorithm Using ILD(Inductive Loop Detector) Based on COSMOS (실시간 신호제어시스템의 대기길이 추정 알고리즘 개발)

  • Seong, Ki-Ju;Lee, Choul-Ki;Jeong, Jun-Ha;Lee, Young-In;Park, Dae-Hyun
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.3 no.1, pp.85-96, 2004
  • The study begins with a basic concept: that the occupancy time of a vehicle detector is directly proportional to the delay of a vehicle. That is, a vehicle's delay is inferred from its occupancy time. The results of the study were far superior in the estimation of queue length. A notable advantage is that the operator does not need to optimize s1, s2, and Thdoc. Thdoc (the critical congestion degree) was changed from 0.7 to 0.2-0.3. However, if vehicles that have experienced delay do not occupy the vehicle detector, the method still has some problems. In conclusion, it is necessary to lengthen the queue detector or to install paired queue detectors. A follow-up study related to this one is also needed, because traffic signal control under congestion is required.
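
A heavily simplified reading of the occupancy-based idea above might look like the following sketch; the cycle length, occupancy times, and the exact use of Thdoc are hypothetical, since the abstract does not give the algorithm in detail.

```python
# Hypothetical illustration: a detector's occupancy ratio is used as a
# congestion degree and compared with the critical threshold Thdoc.
CYCLE_SEC = 120.0
THDOC = 0.25  # critical congestion degree in the 0.2-0.3 range mentioned above

def queue_reaches_detector(occupancy_sec, cycle_sec=CYCLE_SEC, thdoc=THDOC):
    congestion_degree = occupancy_sec / cycle_sec
    return congestion_degree > thdoc

for occ in (10.0, 35.0, 80.0):
    print(f"{occ:5.1f} s occupied -> queued: {queue_reaches_detector(occ)}")
```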


Implicit Distinction of the Race Underlying the Perception of Faces by Event-Related fMRI (Event-related 기능적 MRI 영상을 통한 얼굴인식과정에서 수반되는 무의식적인 인종구별)

  • Kim, Jeong-Seok;Kim, Bum-Soo;Jeun, Sin-Soo;Jung, So-Lyung;Choe, Bo-Young
    • Investigative Magnetic Resonance Imaging, v.9 no.1, pp.43-49, 2005
  • A few studies have shown that the fusiform face area is selectively involved in the perception of faces, including race differences. We investigated the neural substrates of the face-selective region called the fusiform face area in the ventral occipital-temporal cortex, and same-race memory superiority in the fusiform face area, using event-related fMRI. In our fMRI study, subjects (Oriental-Korean) performed implicit distinction of race while they consciously made familiarity judgments, regardless of whether they considered a face to be Oriental-Korean or European-American. For race distinction as an implicit task, the fusiform face areas (FFA) and the right parahippocampal gyrus had a greater response to the presentation of Oriental-Korean faces than to European-American faces, but in the conscious race distinction between Oriental-Korean and European-American faces, no significant difference was observed in the FFA. These results suggest that the different activation in the fusiform regions and right parahippocampal gyrus, resulting from the superiority of same-race memory, could have taken place implicitly through the physiological processes of face recognition.


An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems, v.4 no.6, pp.185-196, 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service), compared to energy non-aware server clusters. They adjust the power mode of each server in a fixed or variable time interval so that only the minimum number of servers needed to handle the current user requests are ON. Previous studies on energy-aware server clusters put effort into reducing power consumption further or keeping QoS, but they do not consider energy efficiency well. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. Our method repeats the following procedure for adjusting the power modes of servers. First, according to the current load and traffic pattern, it classifies the current workload pattern type in a predetermined way. Second, it searches the learning table to check whether learning has been performed for the classified workload pattern type in the past. If so, it uses the already-stored parameters; otherwise, it performs learning for the classified workload pattern type to find the best parameters in terms of energy efficiency and stores the optimized parameters. Third, it adjusts the server power modes with the parameters. We implemented the proposed method and performed experiments with a cluster of 16 servers using three different kinds of load patterns. Experimental results show that the proposed method is better than the existing methods in terms of energy efficiency: the numbers of good responses per unit of power consumed in the proposed method are 99.8%, 107.5% and 141.8% of those in the existing static method, and 102.0%, 107.0% and 106.8% of those in the existing prediction method, for the banking load pattern, real load pattern, and virtual load pattern, respectively.
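
The repeated three-step procedure described in the abstract (classify the workload pattern, consult or fill the learning table, adjust power modes) can be sketched as follows; the pattern names, demand levels, and efficiency model are hypothetical placeholders, not the paper's implementation.

```python
# Hedged sketch of the learning-table loop: classify the current workload
# pattern, look up (or learn) parameters, then adjust server power modes.
DEMAND = {"low": 3, "medium": 8, "high": 14}   # hypothetical request levels per pattern
learning_table = {}                             # pattern type -> optimized parameters

def classify_pattern(load):
    # Placeholder classification step (the paper also uses the traffic pattern).
    return "high" if load > 0.7 else "medium" if load > 0.3 else "low"

def simulate_efficiency(pattern, servers_on):
    # Placeholder model of good responses per unit of power consumed.
    served = min(servers_on, DEMAND[pattern])
    return served / servers_on

def learn_parameters(pattern):
    # Placeholder learning step: among settings that satisfy the demand (QoS),
    # keep the one with the best responses-per-watt.
    feasible = [n for n in range(1, 17) if n >= DEMAND[pattern]]
    best = max(feasible, key=lambda n: simulate_efficiency(pattern, n))
    return {"servers_on": best}

def adjust_power_modes(load):
    pattern = classify_pattern(load)
    if pattern not in learning_table:           # learn once per pattern type
        learning_table[pattern] = learn_parameters(pattern)
    params = learning_table[pattern]
    print(f"load={load:.1f} -> pattern={pattern}, keep {params['servers_on']} servers ON")

for load in (0.2, 0.5, 0.9, 0.5):
    adjust_power_modes(load)
```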