• Title, Summary, Keyword: scheduling

Search results: 4,505

Transmission Method and Simulator Development with Channel bonding for a Mass Broadcasting Service in HFC Networks (HFC 망에서 대용량 방송서비스를 위한 채널 결합 기반 전송 방식 및 시뮬레이터 개발)

  • Shin, Hyun-Chul;Lee, Dong-Yul;You, Woong-Shik;Choi, Dong-Joon;Lee, Chae-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.5
    • /
    • pp.834-845
    • /
    • 2011
  • Massive broadcasting content such as UHD (Ultra High Definition) TV, which requires multi-channel capacity for transmission, has been introduced in recent years. A transmission scheme with channel bonding has been considered for transmitting such massive broadcasting content. In HFC (Hybrid Fiber Coaxial) networks, DOCSIS 3.0 (Data Over Cable Service Interface Specification 3.0) has already applied channel bonding schemes to the up/downstream of the data service. A method different from DOCSIS 3.0 is required to introduce channel bonding into a broadcasting service that transmits unidirectionally on the downstream only. Since massive broadcasting content requires several channels for transmission, VBR (Variable Bit Rate) transmission has been emerging for bandwidth efficiency. In addition, research on channel allocation and resource scheduling is required to guarantee QoS (Quality of Service) for VBR-based broadcasting service. In this paper, we propose a transmission method for mass broadcasting service in HFC networks and present a UHD transmission simulator developed to evaluate its performance. For the evaluation, we define various scenarios. Using the simulator, we assess the feasibility of channel bonding and VBR transmission for a UHD broadcasting system to provide mass broadcasting service efficiently. The developed simulator is expected to contribute to the development of efficient transmission systems for mass broadcasting service.
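The abstract does not describe the bonding policy itself; as a hedged illustration of the channel-bonding idea only (not the paper's algorithm, all names and numbers invented), the following sketch assigns variable-size VBR segments to whichever bonded downstream channel currently carries the least load:

```python
# Hypothetical sketch: greedy assignment of VBR segments to bonded channels.
# This is NOT the paper's transmission method; it only illustrates how
# variable-bit-rate traffic can be spread across several bonded channels.

def bond_schedule(segment_sizes, num_channels):
    """Assign each segment to the channel with the least accumulated load."""
    loads = [0] * num_channels          # bits queued per bonded channel
    assignment = []                     # channel index chosen for each segment
    for size in segment_sizes:
        ch = loads.index(min(loads))    # least-loaded channel wins
        assignment.append(ch)
        loads[ch] += size
    return assignment, loads

# VBR traffic: segment sizes vary from frame to frame.
segments = [7, 3, 5, 2, 8, 4, 6, 1]
assignment, loads = bond_schedule(segments, num_channels=3)
print(assignment, loads)
```

The greedy rule keeps the per-channel queues roughly balanced, which is the property a bonded unidirectional downstream needs to carry a stream wider than any single channel.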

Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

  • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.125-155
    • /
    • 2010
  • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite the increasing importance of ontologies, ontology developers still perceive construction tasks as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase the probability of success of a project. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored toward the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring a complex reasoning process. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. No standard ontology development methodology exists; thus, one of the existing ontology development methodologies had to be chosen. The most important considerations for selecting the ontology development methodology for GSO included whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each development task. We evaluated various ontology development methodologies based on the evaluation framework proposed by Gómez-Pérez et al. We concluded that METHONTOLOGY was the most applicable to the building of GSO for this study. METHONTOLOGY was derived from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach for building an ontology under a centralized development environment at the conceptual level. This methodology consists of three broad processes, each containing specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and an ontology development tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language. OWL was selected for its computational support for consistency checking and classification, which is crucial in developing coherent and useful ontological models for very complex domains. In addition, Protégé-OWL was chosen as the ontology development tool because it is supported by METHONTOLOGY and is widely used thanks to its platform-independent characteristics. Based on the researchers' GSO development experience, some issues relating to METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focused on presenting drawbacks of METHONTOLOGY and discussing how each weakness could be addressed. First, METHONTOLOGY insists that domain experts who have no ontology construction experience can easily build ontologies. However, it is still difficult for these domain experts to develop a sophisticated ontology, especially if they have insufficient background knowledge related to the ontology. Second, METHONTOLOGY does not include a development stage called the "feasibility study." This pre-development stage helps developers ensure not only that a planned ontology is necessary and sufficiently valuable to begin an ontology building project, but also helps them determine whether the project will be successful.
Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies. If an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration. The methodology needs to explain how to allocate specific tasks to different developer groups, and how to combine these tasks once the given jobs are completed. Fifth, METHONTOLOGY fails to sufficiently describe the methods and techniques to apply in the conceptualization stage. Introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance the quality of ontologies. Sixth, METHONTOLOGY does not provide an evaluation process to confirm whether WebODE perfectly transforms a conceptual ontology into a formal ontology. It also does not guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs to add criteria for user evaluation of the actual use of the constructed ontology under user environments. Eighth, although METHONTOLOGY allows continual knowledge acquisition throughout the ontology development process, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage; thus, it can be considered a heavy methodology. Adopting an agile methodology would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues related to METHONTOLOGY from empirical experience; this study is an initial attempt.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • The general solution for classification and regression problems can be found by matching and modifying matrices with information from the real world, and these matrices are then learned in neural networks. This paper treats the primary space as the real world, and the dual space as the space to which the primary space is matched using kernels. In practice, there are two kinds of problems: complete systems, for which an answer can be obtained using the inverse matrix, and ill-posed or singular systems, for which an answer cannot be obtained directly from the inverse of the given matrix. Furthermore, problems are often of the latter kind; therefore, it is necessary to find a regularization parameter that changes ill-posed or singular problems into complete systems. This paper compares performance on both classification and regression problems among GCV and L-Curve, which are well-known methods for obtaining regularization parameters, and kernel methods. Both GCV and L-Curve perform excellently in finding regularization parameters, and their performances are similar, although they show slightly different results under different problem conditions. However, these methods are two-step solutions, because the regularization parameter must first be calculated before the given problems can be handed to other solving methods. Compared with GCV and L-Curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously within the learning process of the pattern weights. This paper also suggests dynamic momentum, which is learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase performance and precision through regularization. Finally, this paper shows that the suggested solution obtains better or equivalent results compared with GCV and L-Curve in experiments using the Iris data, which are considered standard data for classification; Gaussian data, which are typical data for singular systems; and the Shaw data, which are a one-dimensional image restoration problem.
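To make the "two-step" baseline concrete, here is a minimal pure-Python sketch of Tikhonov (ridge) regularization with the regularization parameter chosen by GCV. The design matrix, grid of candidate parameters, and helper names are all invented for illustration; this is the standard GCV recipe, not the paper's one-step kernel method:

```python
# Sketch of two-step regularization: pick lambda by Generalized Cross-
# Validation (GCV), then solve the regularized system. Toy data; this is
# the baseline the paper compares against, not its one-step kernel method.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gcv_ridge(X, y, lambdas):
    """Return (lambda, weights) minimizing the GCV score m*RSS/(m - df)^2."""
    m, n = len(X), len(X[0])
    Xt = [list(col) for col in zip(*X)]
    XtX = [[sum(a * b for a, b in zip(r, c)) for c in zip(*X)] for r in Xt]
    Xty = [sum(Xt[i][j] * y[j] for j in range(m)) for i in range(n)]
    best = None
    for lam in lambdas:
        A = [[XtX[i][j] + (lam if i == j else 0.0) for j in range(n)]
             for i in range(n)]
        w = solve(A, Xty)
        fit = [sum(X[i][j] * w[j] for j in range(n)) for i in range(m)]
        rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
        # effective degrees of freedom = trace(X (X'X + lam I)^-1 X')
        #                              = trace((X'X + lam I)^-1 X'X)
        df = sum(solve(A, [XtX[i][j] for i in range(n)])[j] for j in range(n))
        score = m * rss / (m - df) ** 2
        if best is None or score < best[0]:
            best = (score, lam, w)
    return best[1], best[2]

# Toy ill-conditioned design: two nearly collinear columns (singular-like).
X = [[1.0, 1.001], [1.0, 0.999], [1.0, 1.000], [1.0, 1.002], [1.0, 0.998]]
y = [2.01, 1.99, 2.00, 2.02, 1.98]
lam, w = gcv_ridge(X, y, [10.0 ** k for k in range(-6, 3)])
print(lam, w)
```

The kernel methods favored by the paper fold this parameter search into the weight-learning loop instead of running it as a separate first step.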

A Lower Bound Estimation on the Number of Micro-Registers in Time-Multiplexed FPGA Synthesis (시분할 FPGA 합성에서 마이크로 레지스터 개수에 대한 하한 추정 기법)

  • 엄성용
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.9
    • /
    • pp.512-522
    • /
    • 2003
  • For a time-multiplexed FPGA, a circuit is partitioned into several subcircuits so that they temporally share the same physical FPGA device through hardware reconfiguration. In these architectures, all the hardware reconfiguration information, called contexts, is generated and downloaded into the chip, and then the pre-scheduled context switches occur properly and timely. Typically, the size of the chip required to implement the circuit depends both on the maximum number of LUT blocks required to implement the function of each subcircuit and on the maximum number of micro-registers needed at any one time to store results across context switches. Therefore, many partitioning or synthesis methods try to minimize these two factors. In this paper, we present a new estimation technique to find the lower bound on the number of micro-registers achievable by any synthesis method, without performing any actual synthesis or design space exploration. Lower bound estimation is very important in the sense that it greatly helps in evaluating the results of previous work and even of future work. If the estimated lower bound exactly matches the actual number in the actual design result, we can say that the result is guaranteed to be optimal. In contrast, if they do not match, two cases are possible: a better (more exact) lower bound might be estimated, or a new synthesis result better than those of the previous work might be found. Our experimental results show that there are some differences between the numbers of micro-registers and our estimated lower bounds. One reason for these differences seems to be that our estimation seeks the result with the minimum micro-registers among all possible candidates, regardless of the usage of other resources such as LUTs, while the previous work takes both LUTs and micro-registers into account. In addition, this implies that our method may have some limitation on exact estimation due to the complexity of the problem itself, in the sense that it is much more complicated than LUT estimation and thus needs further improvement, and/or that there may exist other synthesis results better than those of the previous work.
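One simple bound consistent with the abstract's setting (this is my illustrative interpretation, not the paper's estimation technique, and the data are invented): any value produced in one context and consumed in a later one must occupy a micro-register across every switch boundary in between, so the maximum number of values live across any single boundary lower-bounds the micro-register count:

```python
# Hypothetical sketch of a simple micro-register lower bound: a value
# produced before a context switch and consumed after it must be stored
# across that switch. NOT the paper's method; example data are invented.

def microreg_lower_bound(values, num_contexts):
    """values: (producer_context, last_consumer_context) pairs.
    Returns the max, over switch boundaries, of values live across it."""
    best = 0
    for boundary in range(num_contexts - 1):   # switch between ctx b and b+1
        live = sum(1 for prod, cons in values if prod <= boundary < cons)
        best = max(best, live)
    return best

# Each tuple: value produced in context p, last consumed in context c.
values = [(0, 2), (0, 1), (1, 3), (1, 2), (2, 3)]
print(microreg_lower_bound(values, num_contexts=4))
```

Any synthesis result, whatever its LUT usage, must provide at least this many micro-registers, which is the sense in which such a bound lets one judge how close a given design is to optimal.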

HW/SW Partitioning Techniques for Multi-Mode Multi-Task Embedded Applications (멀티모드 멀티태스크 임베디드 어플리케이션을 위한 HW/SW 분할 기법)

  • Kim, Young-Jun;Kim, Tae-Whan
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.8
    • /
    • pp.337-347
    • /
    • 2007
  • An embedded system is called a multi-mode embedded system if it performs multiple applications by dynamically reconfiguring the system functionality. Further, it is called a multi-mode multi-task embedded system if it additionally supports multiple tasks executed within a mode. In this paper, we address a HW/SW partitioning problem, namely the HW/SW partitioning of multi-mode multi-task embedded applications with timing constraints on tasks. The objective of the optimization problem is to find a minimal total system cost of allocating/mapping processing resources to the functional modules in tasks, together with a schedule that satisfies the timing constraints. The key to solving the problem successfully lies in how much of the potential parallelism among the executions of modules can be utilized. However, because of the inherently excessive search space of this parallelism, and to keep schedulability analysis easy, prior HW/SW partitioning methods have not been able to fully exploit the potential parallel execution of modules. To overcome this limitation, we propose a set of comprehensive HW/SW partitioning techniques that solve the three subproblems of the partitioning problem simultaneously: (1) allocation of processing resources, (2) mapping of the processing resources to the modules in tasks, and (3) determination of an execution schedule of the modules. Specifically, based on a precise measurement of the parallel execution and schedulability of modules, we develop a stepwise refinement partitioning technique for single-mode multi-task applications. The proposed technique is then extended to solve the HW/SW partitioning problem of multi-mode multi-task applications. Experiments with a set of real-life applications show that the proposed techniques reduce the implementation cost by 19.0% and 17.0% for single- and multi-mode multi-task applications, respectively, compared with the conventional method.
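The mapping subproblem can be illustrated in miniature (a hedged sketch only: a serialized schedule with no parallelism, brute-force search, and invented module data, all far simpler than the paper's stepwise refinement technique): each module goes to hardware (fast but costly) or software (slow but free), and we minimize hardware cost subject to a deadline.

```python
from itertools import product

# Toy illustration of the HW/SW mapping subproblem with a serialized
# schedule and no parallelism -- NOT the paper's technique. Data invented.
# Each module: (sw_time, hw_time, hw_cost); SW execution adds no cost.

modules = [(10, 3, 5), (8, 2, 4), (6, 1, 6), (4, 2, 2)]
DEADLINE = 18

best_cost, best_map = None, None
for choice in product((0, 1), repeat=len(modules)):   # 1 = implement in HW
    time = sum(m[1] if c else m[0] for m, c in zip(modules, choice))
    cost = sum(m[2] for m, c in zip(modules, choice) if c)
    if time <= DEADLINE and (best_cost is None or cost < best_cost):
        best_cost, best_map = cost, choice
print(best_cost, best_map)
```

Exhaustive search is viable only for a handful of modules; the exponential blow-up of this space, compounded by parallel schedules, is exactly why the paper needs a refinement heuristic rather than enumeration.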

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.131-145
    • /
    • 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the areas of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. Accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes accurate quotation of ATP quite difficult. In addition, many researchers have assumed ATP models with integer time. However, in industry practice, integer times are very rare, and a model developed using integer times therefore only approximates the real system. Various alternative models for an ATP system with time lags have been developed and evaluated. In most cases, these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, and therefore models developed using integer time lags only approximate real systems. The differences caused by this approximation frequently result in significant accuracy degradations. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated research directly related to the topic of this paper. They propose a modeling framework for a system with non-integer time lags and show how to apply the framework to a variety of systems including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, a concerned system was usually modeled either by rounding lags to the nearest integers or by subdividing the time grid to make the lags become integer multiples of the grid. But each approach has a critical weakness: the first approach underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second approach drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a system in which a worldwide headquarters, distribution centers, and manufacturing facilities are globally networked. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system has a large effect in SCM. The system we are concerned with is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags. The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm. We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it efficiently. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedures, which evaluate each infeasible chromosome, make the solutions converge to the optimum quickly.
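The abstract does not give the genetic-algorithm details (encoding, regeneration of infeasible chromosomes, real-variable representation); as a hedged sketch of the evolutionary skeleton only, shown on the generic "OneMax" toy objective rather than the ATP model:

```python
import random

# Minimal genetic-algorithm skeleton with elitism, demonstrated on the toy
# OneMax objective. The paper's chromosome encoding, regeneration procedure
# for infeasible chromosomes, and real-variable scheme are NOT reproduced.

def fitness(ch):
    return sum(ch)                      # toy objective: count of 1-genes

def evolve(pop_size=20, genes=16, gens=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    first_best = max(map(fitness, pop))
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                   # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:10], 2)        # truncation selection
            cut = rng.randrange(1, genes)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                  # point mutation
                i = rng.randrange(genes)
                child = child[:i] + [1 - child[i]] + child[i + 1:]
            nxt.append(child)
        pop = nxt
    return first_best, max(map(fitness, pop))

first, best = evolve()
print(first, best)
```

Elitism guarantees the best fitness never degrades across generations, which is the mechanism behind the abstract's claim that the population converges quickly toward an approximate optimum.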

Performance of Drip Irrigation System in Banana Cultivation - Data Envelopment Analysis Approach

  • Kumar, K. Nirmal Ravi;Kumar, M. Suresh
    • Agribusiness and Information Management
    • /
    • v.8 no.1
    • /
    • pp.17-26
    • /
    • 2016
  • India is the largest producer of banana in the world, producing 29.72 million tonnes from an area of 0.803 million ha with a productivity of 35.7 MT ha-1, accounting for 15.48 and 27.01 per cent of the world's area and production, respectively (www.nhb.gov.in). In India, Tamil Nadu leads other states in terms of both area and production, followed by Maharashtra, Gujarat, and Andhra Pradesh. In the Rayalaseema region of Andhra Pradesh, Kurnool district has a special reputation for the cultivation of banana, with an area of 5,765 hectares and an annual production of 2.01 lakh tonnes in the year 2012-13; hence, it was purposively chosen for the study. On 23rd November 2003, the Government of Andhra Pradesh commenced a comprehensive project called the 'Andhra Pradesh Micro Irrigation Project (APMIP)', the first of its kind in the world, to promote water use efficiency. APMIP offers a 100 per cent subsidy to SC and ST farmers and 90 per cent to other categories of farmers for up to 5.0 acres of land. For acreage between 5 and 10 acres, a 70 per cent subsidy is given, and above 10 acres, 50 per cent. The sampling frame consists of Kurnool district, two mandals, four villages, and 180 sample farmers, comprising 60 farmers each from the Marginal (<1 ha), Small (1-2 ha), and Other (>2 ha) categories. A well-structured, pre-tested schedule was employed to collect the requisite information on the performance of drip irrigation among the sample farmers, and a Data Envelopment Analysis (DEA) model was employed to analyze the performance of drip irrigation on banana farms. The performance of drip irrigation was assessed based on parameters such as Land Development Works (LDW), Fertigation Costs (FC), Volume of Water Supplied (VWS), Annual Maintenance Costs of drip irrigation (AMC), Economic Status of the farmer (ES), and Crop Productivity (CP).
The first four parameters are considered as inputs and the last two as outputs for DEA modelling purposes. The findings revealed that farms operating at CRS are most numerous among other farms (46.66%), followed by marginal (45%) and small farms (28.33%). Similarly, regarding farmers operating at VRS, other farms again lead with 61.66 per cent, followed by marginal (53.33%) and small farms (35%). With reference to scale efficiency, marginal farms dominate with 57 per cent, followed by other (55%) and small farms (50%). At the pooled level, 26.11 per cent of the farms (47 out of 180) operate at CRS with an average technical efficiency score of 0.6138. Nearly 40 per cent of the farmers at the pooled level operate at VRS with an average technical efficiency score of 0.7241. As regards scale efficiency, nearly 52 per cent of the farmers (94 out of 180) at the pooled level either performed at the optimum scale or were close to it (farms having scale efficiency values equal to or greater than 0.90). The majority of the farms (39.44%) operate at IRS and only 29 per cent at DRS. This signifies that more resources should be provided to the farms operating at IRS, while resources should be reduced for the farms operating at DRS. Nearly 32 per cent of the farms operate at CRS, indicating efficient utilization of resources. A log-linear regression model was used to analyze the major determinants of input use efficiency on banana farms. The input variables considered in the DEA model were again considered as influential factors for the CRS obtained for the three categories of farmers. Volume of water supplied ($X_1$) and fertigation cost ($X_2$) are the major determinants of banana farms across all farmer categories and even at the pooled level.
In view of their positive influence on the CRS, it is essential to strengthen modern irrigation infrastructure such as drip irrigation and to offer more fertilizer subsidies to farmers to enhance crop production cost-effectively in Kurnool district of Andhra Pradesh, India. This study further suggests that the present era of Information Technology can help irrigation management by generating new techniques, extension, adoption, and information. It can also guide farmers in irrigation scheduling and in quantifying irrigation water requirements in accordance with the water availability in a particular season. So, it is high time for the Government of India to pay adequate attention to Information and Communication Technology (ICT) applications in irrigation water management, facilitating the deployment of Decision Support Systems (DSSs) at various levels of planning and management of water resources in the country.
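For reference, the CRS and VRS technical efficiency scores cited in the abstract come from standard DEA linear programs; a textbook statement (not quoted from the paper) of the input-oriented CCR model for farm $o$, with $m$ inputs $x$, $s$ outputs $y$, and $n$ farms, is:

```latex
\min_{\theta,\,\lambda}\ \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta x_{io} \quad (i = 1,\dots,m),
\qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \quad (r = 1,\dots,s),
\qquad
\lambda_j \ge 0 .
```

The VRS (BCC) variant adds the convexity constraint $\sum_{j} \lambda_j = 1$, and scale efficiency is the ratio of the CRS score to the VRS score, which is how the abstract's three sets of efficiency figures relate to one another.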

Current Status and Future Prospect of Plant Disease Forecasting System in Korea (우리 나라 식물병 발생예찰의 현황과 전망)

  • Kim, Choong-Hoe
    • Research in Plant Disease
    • /
    • v.8 no.2
    • /
    • pp.84-91
    • /
    • 2002
  • Disease forecasting in Korea was first studied in the Department of Fundamental Research of the Central Agricultural Technology Institute in Suwon in 1947, where the dispersal of air-borne conidia of the blast and brown spot pathogens of rice was examined. The disease forecasting system in Korea operates on information obtained from 200 main forecasting plots scattered around the country (rice 150, economic crops 50) and 1,403 supplementary observational plots (rice 1,050, others 353) maintained by the Korean government. The target crops and diseases in the two kinds of forecasting plots amount to 30 crops and 104 diseases. Disease development in the forecasting plots is examined by two extension agents specialized in disease forecasting, working in the national Agricultural Technology Service Center (ATSC) founded in each city and prefecture. The data obtained by the extension agents are transferred to a central organization, the Rural Development Administration (RDA), through an internet-web system for analysis in a nation-wide forecasting program, and are forwarded to the Central Forecasting Council, which consists of 12 members from administration, universities, research institutions, the meteorology station, and the mass media, to discuss the present situation of disease development and its subsequent progress. As a result of the analysis, the council issues a forecasting information message that is announced publicly via mass media to 245 agencies including the ATSC, which informs local administrations, related agencies, and farmers so that disease control activities can be implemented. The future success of the plant disease forecasting system, however, is thought to depend on securing excellent extension agents specialized in disease forecasting, elevating their forecasting ability through continuous training, and furnishing prominent forecasting equipment. Research on plant disease forecasting in Korea has concentrated on rice blast, for which much information is available, but is substantially limited for other diseases. Most forecasting research has failed to maintain continuity on a specialized topic, neglecting the steady improvement needed for practical use. Since disease forecasting loses its value without practicality, more effort is needed to improve the practicality of forecasting methods in both spatial and temporal aspects. Since the significance of disease forecasting is directly related to economic profit, further forecasting research should be planned and pursued in relation to fungicide spray scheduling and decision-making for control activities.

A Critical Path Search and The Project Activities Scheduling (임계경로 탐색과 프로젝트 활동 일정 수립)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.141-150
    • /
    • 2012
  • This paper suggests a critical path search algorithm that can easily draw the PERT/GANTT chart used to plan and manage a project schedule. To evaluate the critical path that determines the project schedule, the Critical Path Method (CPM) is generally utilized. However, CPM goes through 5 stages to calculate the critical path for a network diagram designed in advance according to the correlative relationships and execution periods of the project execution activities. It also may not correctly evaluate $T_E$ (the earliest time), since it does not specify how to determine the sequence of the node activities used to calculate $T_E$. Moreover, the sequence of the network diagram activities obtained from CPM cannot be represented visually, for which Lucko suggested an algorithm that goes through 9 stages. The suggested algorithm, on the other hand, first decides the sequence in advance by reallocating the nodes into levels after a breadth-first search of the previously designed network diagram. Next, it chooses the nodes of each level and determines the critical path immediately after calculating $T_E$. Finally, it makes the execution sequence of the project activities precisely visible by slightly shifting the $T_E$ of the nodes not belonging to the critical path, based on the $T_E$ of the nodes that do belong to the critical path. The suggested algorithm has proved its applicability on 10 real project data sets. It obtained the critical path for all the projects and represented the execution sequence of the activities precisely and visually. It also has the advantages of reducing the 5 stages of CPM to 1, simplifying Lucko's 9 stages into the 2 stages used to clearly express the execution sequence of the activities, and converting the representation directly into a PERT/GANTT chart.
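The forward-pass idea behind this kind of algorithm can be sketched as follows (a generic CPM computation on an invented four-activity network, not the paper's exact procedure): assign nodes to levels by breadth-first/topological order, compute earliest times $T_E$ in that order, then mark the zero-slack nodes as the critical path.

```python
from collections import deque

# Generic CPM sketch: earliest times T_E via topological (level) order,
# latest starts via a backward pass, critical path = zero-slack nodes.
# The four-activity network below is invented for illustration.

durations = {'A': 3, 'B': 2, 'C': 4, 'D': 2}
succ = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

pred = {n: [] for n in durations}
for u, vs in succ.items():
    for v in vs:
        pred[v].append(u)

# Forward pass: process nodes level by level (Kahn's BFS topological order).
indeg = {n: len(pred[n]) for n in durations}
queue = deque(n for n in durations if indeg[n] == 0)
te, order = {}, []
while queue:
    u = queue.popleft()
    order.append(u)
    te[u] = max((te[p] + durations[p] for p in pred[u]), default=0)
    for v in succ[u]:
        indeg[v] -= 1
        if indeg[v] == 0:
            queue.append(v)

finish = max(te[n] + durations[n] for n in durations)

# Backward pass: latest start times; zero-slack nodes form the critical path.
ls = {}
for u in reversed(order):
    ls[u] = min((ls[v] for v in succ[u]), default=finish) - durations[u]
critical = [n for n in order if te[n] == ls[n]]
print(finish, critical)
```

Fixing the processing order by levels before the $T_E$ calculation is exactly what removes the ambiguity the abstract criticizes in plain CPM.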

Feasibility Test on Automatic Control of Soil Water Potential Using a Portable Irrigation Controller with an Electrical Resistance-based Watermark Sensor (전기저항식 워터마크센서기반 소형 관수장치의 토양 수분퍼텐셜 자동제어 효용성 평가)

  • Kim, Hak-Jin;Roh, Mi-Young;Lee, Dong-Hoon;Jeon, Sang-Ho;Hur, Seung-Oh;Choi, Jin-Yong;Chung, Sun-Ok;Rhee, Joong-Yong
    • Journal of Bio-Environment Control
    • /
    • v.20 no.2
    • /
    • pp.93-100
    • /
    • 2011
  • Maintenance of adequate soil water potential during the period of crop growth is necessary to support optimum plant growth and yields. A better understanding of soil water movement within and below the rooting zone can facilitate optimal irrigation scheduling aimed at minimizing the adverse effects of water stress on crop growth and development and the leaching of water below the root zone, which can have adverse environmental effects. The objective of this study was to evaluate the feasibility of using a portable irrigation controller with a Watermark sensor for the cultivation of drip-irrigated vegetable crops in a greenhouse. The control capability of the irrigation controller at a soil water potential of -20 kPa was evaluated under summer conditions by cultivating 45-day-old tomato plants grown in three differently textured soils (sandy loam, loam, and loamy sand). Water contents through each soil profile were continuously monitored using three Sentek probes, each consisting of three capacitance sensors at 10, 20, and 30 cm depths. Even though a repeatable cycling of soil water potential occurred for the potential treatment, the lower limit of the Watermark reading (about 0 kPa) obtained in this study revealed a limitation of using the Watermark sensor for optimal irrigation of tomato plants when -20 kPa was used as the trigger point for irrigation. This problem might be related to the slow response time and inadequate soil-sensor interface of the Watermark sensor as compared with a porous ceramic cup-based tensiometer with a sensitive pressure transducer. In addition, the irrigation time of 50 to 60 min in each irrigation operation caused a rapid drop of the potential to zero, resulting in over-irrigation of the tomatoes. There were differences in water content among the three soil types under variable rate irrigation, with water content ranges of 16 to 24%, 17 to 28%, and 24 to 32% for loamy sand, sandy loam, and loam soils, respectively. The greatest rate of increase in water content was observed in the top 10 cm of the sandy loam soil, within about 60 min from the start of irrigation.
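The over-irrigation effect the abstract describes can be caricatured with a toy threshold-control loop (all rates and the simple linear soil model here are invented, not from the study): irrigating at the -20 kPa set point with a long fixed run time overshoots the potential toward 0 kPa, while shorter pulses keep it in range.

```python
# Toy simulation of threshold-triggered irrigation at a -20 kPa set point.
# Drying and wetting rates are invented numbers; the point is only that a
# long fixed run time (50-60 min, as in the study) overshoots toward 0 kPa.

def simulate(run_min, hours=48, setpoint=-20.0, dry_per_h=1.0, wet_per_min=0.5):
    potential = -10.0                   # soil water potential in kPa
    trace = []
    for _ in range(hours):
        potential -= dry_per_h          # soil dries between irrigations
        if potential <= setpoint:       # trigger at or below the set point
            potential = min(0.0, potential + wet_per_min * run_min)
        trace.append(potential)
    return trace

short = simulate(run_min=10)            # modest 10-minute pulses
long = simulate(run_min=60)             # 60-minute runs, as in the study
print(min(short), max(long))
```

With the long run time the simulated potential repeatedly saturates at 0 kPa (over-irrigation), whereas the short pulses cycle within the intended band, mirroring the behavior the study reports.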