• Title/Summary/Keyword: Locality sensitive hashing

Locality-Sensitive Hashing Techniques for Nearest Neighbor Search

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.4, pp.300-307, 2012
  • When the volume of data grows large, even simple tasks can become a significant concern. Nearest neighbor search is such a task: it finds, in a data set, the k data points nearest to a query. Locality-sensitive hashing techniques have been developed for approximate but fast nearest neighbor search. This paper introduces the notion of locality-sensitive hashing and surveys locality-sensitive hashing techniques, categorizing them by several criteria, presenting their characteristics, and comparing their performance.
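
The core idea surveyed above can be illustrated with a minimal sketch (not taken from the paper): a random-hyperplane LSH index in which points with colliding signatures share a bucket, and a query is compared exactly only against its bucket's candidates. All names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_index(data, n_bits=12):
    """Hash each point to a bucket keyed by the signs of random projections."""
    planes = rng.normal(size=(n_bits, data.shape[1]))
    keys = data @ planes.T > 0                        # n x n_bits boolean signatures
    buckets = {}
    for i, key in enumerate(map(tuple, keys)):
        buckets.setdefault(key, []).append(i)
    return planes, buckets

def query(q, data, planes, buckets, k=3):
    """Rank only the points in the query's bucket, falling back to a full scan."""
    key = tuple(q @ planes.T > 0)
    candidates = buckets.get(key, range(len(data)))
    dists = [(np.linalg.norm(data[i] - q), i) for i in candidates]
    return [i for _, i in sorted(dists)[:k]]

data = rng.normal(size=(1000, 64))
planes, buckets = build_index(data)
print(query(data[0], data, planes, buckets))
```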

Locality-Sensitive Hashing for Data with Categorical and Numerical Attributes Using Dual Hashing

  • Lee, Keon Myung
    • International Journal of Fuzzy Logic and Intelligent Systems, v.14 no.2, pp.98-104, 2014
  • Locality-sensitive hashing techniques have been developed to efficiently handle nearest neighbor searches and similar-pair identification problems for large volumes of high-dimensional data. This study proposes a locality-sensitive hashing method that can be applied to nearest neighbor search problems for data sets containing both numerical and categorical attributes. The proposed method makes use of dual hashing functions, where one function is dedicated to numerical attributes and the other to categorical attributes. The method consists of creating an indexing structure for each of the dual hashing functions, gathering and combining the candidate sets, and thoroughly examining them to determine the nearest ones. The proposed method is evaluated on several synthetic data sets, and the results show that it improves performance for large amounts of data with both numerical and categorical attributes.
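
A hedged sketch of the dual-hashing idea described above (illustrative only; the helper names, bucket width, and the choice of a union for combining candidates are assumptions, not the authors' code): one hash handles the numerical attributes with a p-stable-style projection, the other hashes the categorical attributes, and the two candidate sets are combined before an exact check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical attributes: p-stable-style bucket index floor((a.x + b) / w).
a, b, w = rng.normal(size=3), rng.uniform(0, 4.0), 4.0
def hash_numeric(x):
    return int(np.floor((np.dot(a, x) + b) / w))

# Categorical attributes: bucket by hashing the tuple of category values.
def hash_categorical(cats, n_buckets=8):
    return hash(tuple(cats)) % n_buckets

def candidates(index_num, index_cat, x_num, x_cat):
    """Gather candidates from both indexes and combine them (union here)."""
    return set(index_num.get(hash_numeric(x_num), [])) | \
           set(index_cat.get(hash_categorical(x_cat), []))

# Build the two indexing structures over a toy mixed-attribute data set.
records = [(rng.normal(size=3), [rng.choice(["red", "blue"]), rng.choice(["A", "B"])])
           for _ in range(100)]
index_num, index_cat = {}, {}
for i, (x_num, x_cat) in enumerate(records):
    index_num.setdefault(hash_numeric(x_num), []).append(i)
    index_cat.setdefault(hash_categorical(x_cat), []).append(i)

q_num, q_cat = records[0]
print(sorted(candidates(index_num, index_cat, q_num, q_cat))[:10])
```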

Enhanced Locality Sensitive Clustering in High Dimensional Space

  • Chen, Gang;Gao, Hao-Lin;Li, Bi-Cheng;Hu, Guo-En
    • Transactions on Electrical and Electronic Materials, v.15 no.3, pp.125-129, 2014
  • A dataset can be clustered by merging the bucket indices that come from the random projections of locality sensitive hashing functions; for this to work, the merging interval must be calculated first. To improve the feasibility of large-scale data clustering in high-dimensional space, we propose an enhanced Locality Sensitive Hashing Clustering Method. Firstly, multiple hashing functions are generated. Secondly, data points are projected to bucket indices. Thirdly, bucket indices are clustered to obtain class labels. Experimental results on synthetic datasets show that this method achieves high accuracy at much improved clustering speed, which makes it well suited to clustering data in high-dimensional space.
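
A minimal sketch of the three steps listed above (illustrative; a single projection is shown, and the bucket width and merging interval are assumed values, not the paper's): points are projected to integer bucket indices, and buckets whose indices lie within the merging interval are merged into one cluster label.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(200, 32))

# Step 1: generate a random-projection hashing function (one shown for brevity).
a, w = rng.normal(size=data.shape[1]), 2.0

# Step 2: project data points to integer bucket indices.
indices = np.floor(data @ a / w).astype(int)

# Step 3: merge buckets whose indices lie within the merging interval
# to obtain cluster labels.
merge_interval = 1
labels, current = np.empty(len(data), dtype=int), -1
prev_index = None
for idx in np.sort(np.unique(indices)):
    if prev_index is None or idx - prev_index > merge_interval:
        current += 1                      # start a new cluster
    labels[indices == idx] = current
    prev_index = idx

print(np.bincount(labels))
```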

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices, v.21 no.11, pp.681-688, 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is gaining increasingly widespread use together with Locality Sensitive Hashing, which is efficient for high-dimensional and sparse data. Based on a two-stage strategy, we apply the locality sensitive hashing technique to divide users into small subsets, and then calculate the similarity between pairs within each subset using a brute-force method on MapReduce. The stage that generates candidate groups is particularly important, since brute-force calculation is performed in the following step; however, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups the candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
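
The two-stage strategy reads naturally as a map step (LSH bucketing into candidate groups) followed by a reduce step (brute-force k-NN inside each group). A single-machine Python sketch of that flow follows; the naive size-based regrouping, the threshold, and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
users = rng.normal(size=(500, 16))
MAX_GROUP = 64        # regroup candidate groups larger than this (assumed threshold)

# Map stage: LSH signature -> candidate group of user ids.
planes = rng.normal(size=(8, users.shape[1]))
groups = defaultdict(list)
for i, sig in enumerate(map(tuple, users @ planes.T > 0)):
    groups[sig].append(i)

# Regrouping: split oversized candidate groups so the brute-force stage stays cheap.
balanced = []
for members in groups.values():
    for s in range(0, len(members), MAX_GROUP):
        balanced.append(members[s:s + MAX_GROUP])

# Reduce stage: brute-force k-NN inside each candidate group.
k, knn_graph = 5, defaultdict(set)
for members in balanced:
    for i in members:
        dists = sorted((np.linalg.norm(users[i] - users[j]), j) for j in members if j != i)
        knn_graph[i].update(j for _, j in dists[:k])

print(len(knn_graph), sum(len(v) for v in knn_graph.values()))
```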

Automated Fact Checking Model Using Efficient Transformer (효율적인 트랜스포머를 이용한 팩트체크 자동화 모델)

  • Yun, Hee Seung;Jung, Jason J.
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.9, pp.1275-1278, 2021
  • Nowadays, fake news from newspapers and social media is a serious issue for news credibility. Several machine learning methods (such as LSTM, logistic regression, and the Transformer) have been applied to fact checking. In this paper, we present a Transformer-based fact checking model that improves computational efficiency. Locality Sensitive Hashing (LSH) is employed to compute attention values efficiently, reducing computation time. With LSH, the model can group semantically similar words and compute attention values within each group. The proposed model achieves 75% accuracy, with an F1 micro score of 42.9% and an F1 macro score of 75%.
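
The LSH attention idea summarized above can be sketched as follows (a simplification, not the paper's implementation): positions are bucketed by a random-projection hash, and softmax attention is computed only among positions that share a bucket. The hash depth, dimensions, and shared query/key matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def lsh_attention(q, k, v, n_bits=4):
    """Attention restricted to positions whose LSH buckets match (simplified sketch)."""
    planes = rng.normal(size=(n_bits, q.shape[1]))
    buckets = (q @ planes.T > 0).astype(int) @ (1 << np.arange(n_bits))  # bucket id per position
    out = np.zeros_like(v)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        scores = q[idx] @ k[idx].T / np.sqrt(q.shape[1])
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        out[idx] = weights @ v[idx]
    return out

seq_len, d = 128, 32
q = k = rng.normal(size=(seq_len, d))   # shared QK, as in common LSH attention variants
v = rng.normal(size=(seq_len, d))
print(lsh_attention(q, k, v).shape)
```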

A Study on Malware Clustering Technique Using API Call Sequence and Locality Sensitive Hashing (API 콜 시퀀스와 Locality Sensitive Hashing을 이용한 악성코드 클러스터링 기법에 관한 연구)

  • Goh, Dong Woo;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology, v.27 no.1, pp.91-101, 2017
  • API call sequence analysis is a kind of analysis that uses API call information extracted from a target program. Compared to other techniques, it has the advantage of characterizing the behavior of the target. However, existing API call sequence analysis can assign the same characteristics to different functions during analysis. To resolve this identification issue and improve analysis performance, this study adds an API abstraction technique to the existing analysis. The similarity between target programs is then computed, and the programs are clustered into similar types by applying LSH to the abstracted API call sequences extracted from the analyzed targets. In this way, the study contributes to improving the accuracy of malware analysis based on the discovered information about the identified malware types.
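
A hedged sketch of the pipeline described above (the abstraction map, n-gram size, and MinHash parameters are all illustrative assumptions, not the study's actual values): API names are abstracted to coarser categories, the abstracted sequence is turned into n-gram shingles, and a MinHash-style LSH groups samples whose shingle sets are similar.

```python
import hashlib

# Illustrative abstraction map: collapse concrete API names to behavior categories.
ABSTRACT = {"CreateFileW": "FILE", "WriteFile": "FILE",
            "RegSetValueExA": "REG", "InternetOpenA": "NET"}

def shingles(api_calls, n=2):
    """Abstract the call sequence, then take n-gram shingles of it."""
    seq = [ABSTRACT.get(call, "OTHER") for call in api_calls]
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def minhash(shingle_set, n_hashes=16):
    """Simple MinHash signature using salted MD5 hashes."""
    return tuple(min(int(hashlib.md5(f"{salt}-{s}".encode()).hexdigest(), 16)
                     for s in shingle_set)
                 for salt in range(n_hashes))

# Samples whose first signature band collides land in the same cluster bucket.
samples = {
    "mal_a": ["CreateFileW", "WriteFile", "RegSetValueExA"],
    "mal_b": ["CreateFileW", "WriteFile", "RegSetValueExA"],
    "mal_c": ["InternetOpenA", "CreateFileW"],
}
buckets = {}
for name, calls in samples.items():
    band = minhash(shingles(calls))[:4]          # first band of the signature
    buckets.setdefault(band, []).append(name)
print(list(buckets.values()))
```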

Copy-move Forgery Detection Robust to Various Transformation and Degradation Attacks

  • Deng, Jiehang;Yang, Jixiang;Weng, Shaowei;Gu, Guosheng;Li, Zheng
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.9, pp.4467-4486, 2018
  • To deal with the low robustness of Copy-Move Forgery Detection (CMFD) under various transformation and degradation attacks, a novel CMFD method is proposed in this paper. The main advantages of the proposed work include: (1) the Discrete Analytical Fourier-Mellin Transform (DAFMT) and Locality Sensitive Hashing (LSH) are combined to extract block features and detect potential copy-move pairs; (2) the Euclidean distance is incorporated with the pixel variance to filter out false potential copy-move pairs in the post-verification step. In addition to extracting effective features of an image block, the DAFMT has the properties of rotation and scale invariance. Unlike the traditional lexicographic sorting method, LSH is robust to the degradations of Gaussian noise and JPEG compression. Because most of the false copy-move pairs are located close to each other in the spatial domain or lie in homogeneous regions, the Euclidean distance and pixel variance are employed in the post-verification step. Evaluated quantitatively with the precision-recall-$F_1$ model on the Image Manipulation Dataset (IMD) and the Copy-Move Hard Dataset (CMHD), the proposed method outperforms Emam et al.'s and Li et al.'s works in terms of recall and $F_1$.
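
A simplified sketch of that matching-and-verification flow (using plain block statistics in place of DAFMT features, and assumed thresholds, so it only illustrates the structure of the method): block features are bucketed with an LSH-style quantization, blocks sharing a bucket become potential copy-move pairs, and pairs that are spatially too close or that lie in low-variance regions are filtered out.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

rng = np.random.default_rng(5)
image = rng.integers(0, 256, size=(64, 64)).astype(float)
image[40:48, 40:48] = image[8:16, 8:16]          # plant a copy-move region

B, MIN_DIST, MIN_VAR = 8, 10.0, 1.0              # block size and assumed thresholds

# Extract block features; a coarse quantization of the feature acts as the LSH key.
buckets = defaultdict(list)
for y in range(0, image.shape[0] - B + 1, 4):
    for x in range(0, image.shape[1] - B + 1, 4):
        block = image[y:y + B, x:x + B]
        feature = (block.mean(), block.std())    # stand-in for DAFMT features
        key = tuple(np.round(np.array(feature) / 2.0).astype(int))
        buckets[key].append((y, x, block.var()))

# Post-verification: keep pairs that are far apart spatially and not flat regions.
matches = []
for blocks in buckets.values():
    for (y1, x1, v1), (y2, x2, v2) in combinations(blocks, 2):
        if np.hypot(y1 - y2, x1 - x2) > MIN_DIST and min(v1, v2) > MIN_VAR:
            matches.append(((y1, x1), (y2, x2)))
print(matches[:5])
```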

k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of information and communication convergence engineering, v.16 no.2, pp.99-105, 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm designed to find, for every object in a dataset R, its k nearest neighbors in another dataset S. Most related studies on k-NN Join are based on single-computer operations. As data dimensionality and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce a locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach for high-dimensional big data. LSH is used to map similar data onto the same bucket, which reduces the data search scope. To achieve parallel execution of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects across a cluster. Results show that the proposed approach is fast and accurate for high-dimensional big data.
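
A minimal PySpark sketch of the LSH k-NN Join idea follows (a local, single-machine illustration under assumed parameters, not the paper's implementation): records from R and S are keyed by an LSH bucket, co-grouped, and the brute-force k-NN from R to S is computed per bucket.

```python
from pyspark import SparkContext
import numpy as np

sc = SparkContext("local[*]", "lsh-knn-join-sketch")
rng = np.random.default_rng(6)
dim, k, n_bits = 16, 3, 6
planes = rng.normal(size=(n_bits, dim))

def bucket(vec):
    """Random-hyperplane signature used as the LSH bucket key."""
    return tuple(bool(b) for b in np.asarray(vec) @ planes.T > 0)

R = [("r%d" % i, rng.normal(size=dim)) for i in range(200)]
S = [("s%d" % i, rng.normal(size=dim)) for i in range(200)]

# Tag each record with its bucket and its source data set, then group by bucket.
tagged = sc.parallelize([(bucket(v), ("R", rid, v)) for rid, v in R] +
                        [(bucket(v), ("S", sid, v)) for sid, v in S])

def knn_in_bucket(records):
    """Brute-force k-NN from R to S restricted to one bucket."""
    recs = list(records)
    s_side = [(sid, v) for src, sid, v in recs if src == "S"]
    for src, rid, v in recs:
        if src == "R" and s_side:
            dists = sorted((float(np.linalg.norm(v - sv)), sid) for sid, sv in s_side)
            yield rid, [sid for _, sid in dists[:k]]

result = tagged.groupByKey().flatMap(lambda kv: knn_in_bucket(kv[1])).collect()
print(result[:3])
sc.stop()
```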

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.1, pp.364-380, 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and to weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to carry out object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
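
For reference, the E2LSH family mentioned above hashes a feature v with p-stable projections, $h_{a,b}(v) = \lfloor (a \cdot v + b)/w \rfloor$, and concatenates several such functions into one key. A minimal sketch follows; the dimensions, bucket width, and the simplification of treating each occupied bucket as one visual word are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(7)
dim, n_funcs, w = 128, 4, 4.0        # SIFT-like dimension; illustrative parameters

# p-stable E2LSH family: h_{a,b}(v) = floor((a.v + b) / w), concatenated n_funcs times.
A = rng.normal(size=(n_funcs, dim))
B = rng.uniform(0, w, size=n_funcs)

def e2lsh_key(v):
    return tuple(np.floor((A @ v + B) / w).astype(int))

# Each occupied bucket plays the role of one (weakly randomized) visual word.
features = rng.normal(size=(1000, dim))          # stand-in for SIFT descriptors
vocab = {}
for f in features:
    vocab.setdefault(e2lsh_key(f), []).append(f)
print("visual words:", len(vocab))
```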

Multi-Object Tracking Based on Keypoints Using Homography in Mobile Environments (모바일 환경 Homography를 이용한 특징점 기반 다중 객체 추적)

  • Han, Woo ri;Kim, Young-Seop;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology, v.14 no.3, pp.67-72, 2015
  • This paper proposes an object tracking system based on keypoints using homography in mobile environments. The proposed system performs markerless tracking and consists of four modules: recognition, tracking, detection, and learning. The recognition module detects and identifies the object in the current frame by matching it against the database using LSH over SURF features, and then generates standard object information. The tracking module tracks the object using the homography computed by matching the learned object keypoints to the current object keypoints, and then updates the window containing the object to define its pose. When the system fails to track, the detection module finds the object using the best available knowledge among the learned object information. Experimental results show that the proposed system is able to recognize and track objects while updating the object's pose for use on a mobile platform.
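
The recognition-then-tracking flow can be illustrated with a short OpenCV sketch (a stand-in, not the paper's four-module system: ORB binary features with FLANN's LSH index are used here in place of SURF, and all parameters are illustrative): keypoints are matched against a stored reference via an LSH-backed matcher, and the resulting correspondences give the homography used to update the object's pose window.

```python
import cv2
import numpy as np

def locate_object(reference_img, frame):
    """Match keypoints with an LSH-backed matcher and estimate the homography."""
    orb = cv2.ORB_create(1000)                      # ORB as a stand-in for SURF
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_cur, des_cur = orb.detectAndCompute(frame, None)
    if des_ref is None or des_cur is None:
        return None

    # FLANN LSH index (algorithm=6) for binary descriptors.
    matcher = cv2.FlannBasedMatcher(dict(algorithm=6, table_number=6,
                                         key_size=12, multi_probe_level=1), {})
    raw = matcher.knnMatch(des_ref, des_cur, k=2)
    good = [m for m, n in (p for p in raw if len(p) == 2)
            if m.distance < 0.75 * n.distance]      # Lowe's ratio test
    if len(good) < 4:
        return None

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_cur[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H    # warp the reference window with H to update the object's pose
```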