Optimizing Garbage Collection Overhead of Host-level Flash Translation Layer for Journaling Filesystems

  • Son, Sehee (School of Computer Science and Engineering, Pusan National University)
  • Ahn, Sungyong (School of Computer Science and Engineering, Pusan National University)
  • Received : 2021.01.27
  • Accepted : 2021.02.07
  • Published : 2021.05.31

Abstract

NAND flash memory-based SSDs require internal software, the Flash Translation Layer (FTL), to provide the traditional block device interface to the host because of physical constraints such as erase-before-write and the large erase-block size. However, because useful host-side information cannot be delivered to the FTL through the narrow block device interface, SSDs suffer from a variety of problems, including increased garbage collection overhead, long tail latency, and unpredictable I/O latency. In contrast, a new type of SSD, the open-channel SSD, exposes the internal structure of the SSD to the host so that the underlying NAND flash memory can be managed directly by a host-level FTL. In particular, classifying I/O data using host-side information can reduce garbage collection overhead. In this paper, we propose a new scheme that reduces the garbage collection overhead of open-channel SSDs by separating the journal from other file data for journaling filesystems. Because the journal has a different lifespan from other file data, the Write Amplification Factor (WAF) caused by garbage collection can be reduced. The proposed scheme is implemented by modifying the host-level FTL of Linux and evaluated with both Fio and Filebench. According to the experimental results, the proposed scheme improves I/O performance by 46%~50% while reducing the WAF of open-channel SSDs by more than 33% compared to the existing scheme.
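The intuition behind journal/data separation can be illustrated with a toy page-mapped FTL simulator. This is a hypothetical sketch, not the paper's actual Linux host-level FTL: block geometry, stream keys, and the workload below are all made up for illustration. Short-lived journal pages that share a block with long-lived file data force garbage collection to copy the still-valid data pages; giving each class its own block lets journal blocks become wholly invalid and be erased for free.

```python
# Toy page-mapped FTL (hypothetical sketch, not the paper's implementation)
# showing why steering journal writes and file-data writes into separate
# flash blocks lowers WAF (WAF = total flash writes / host writes).

PAGES_PER_BLOCK = 4

class Page:
    def __init__(self, lpn):
        self.lpn = lpn      # logical page number
        self.valid = True   # flipped off when the host overwrites this lpn

class Block:
    def __init__(self):
        self.pages = []
    def full(self):
        return len(self.pages) >= PAGES_PER_BLOCK

class Ftl:
    def __init__(self, separate_journal):
        self.separate = separate_journal
        self.blocks = []    # all allocated flash blocks
        self.active = {}    # stream key -> currently open block
        self.map = {}       # lpn -> Page holding the live copy
        self.flash_writes = 0
        self.host_writes = 0

    def write(self, lpn, kind):          # kind: "journal" or "data"
        self.host_writes += 1
        self._program(lpn, kind)

    def _program(self, lpn, kind):
        # With separation each kind gets its own open block; without it,
        # journal and data pages are interleaved in the same block.
        key = kind if self.separate else "mixed"
        blk = self.active.get(key)
        if blk is None or blk.full():
            blk = Block()
            self.blocks.append(blk)
            self.active[key] = blk
        old = self.map.get(lpn)
        if old is not None:
            old.valid = False            # overwrite invalidates the old copy
        page = Page(lpn)
        blk.pages.append(page)
        self.map[lpn] = page
        self.flash_writes += 1

    def gc(self):
        for blk in list(self.blocks):
            if blk in self.active.values():
                continue                 # never collect an open block
            if all(p.valid for p in blk.pages):
                continue                 # nothing to reclaim here
            for p in blk.pages:
                if p.valid:              # copy-out = write amplification
                    self._program(p.lpn, "gc")
            self.blocks.remove(blk)

def waf(separate_journal):
    ftl = Ftl(separate_journal)
    for i in range(64):
        ftl.write(1000 + i, "data")      # cold: unique file data, never updated
        ftl.write(0, "journal")          # hot: journal area overwritten in place
    ftl.gc()
    return ftl.flash_writes / ftl.host_writes
```

In this toy workload, `waf(True)` (separated streams) stays at 1.0 because journal blocks are fully invalid by GC time, while `waf(False)` (mixed placement) exceeds 1.0 because every victim block still holds valid file-data pages that must be copied out.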

Acknowledgement

This work was supported by a 2-Year Research Grant of Pusan National University.
