19th May, 2023, 06:29 AM
Hello,
One Kodi plugin repeatedly downloads a file of tens of MB, and to save SDHC card write cycles I would like to keep it in RAM. Using "df -Th" I found that the only tmpfs mounts (RAM-based filesystems, per my understanding) are /dev and /run:
# df -Th|grep tmpfs
Code:
devtmpfs       devtmpfs  1.6G   71M  1.5G   5% /dev
none           tmpfs     360M  748K  359M   1% /run
none           tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup

I could use /dev/bigfile.name if I "chown xbian /dev", but that may not be good practice; /run, on the other hand, already has write permissions for all users ("Access: (1777/drwxrwxrwt)"). Besides that, I was able to do:
Code:
mkdir -p /mnt/tmpfs
echo "tmpfs /mnt/tmpfs tmpfs size=100M,mode=0755,uid=xbian 0 0" >> /etc/fstab
mount -a

(allowing me to use the /mnt/tmpfs/ directory as RAM-disk storage)

Yet someone suggested I use zram with lz4 compression (claiming better compression than the XBian default lzo-rle) to save space. I have doubts about this approach, since zram is slower than regular tmpfs, and under high memory pressure tmpfs pages are moved to zram swap anyway (per my understanding). Still, here are the steps I tried (they failed):
# zramctl --output-all
Code:
NAME       DISKSIZE DATA COMPR ALGORITHM STREAMS ZERO-PAGES TOTAL MEM-LIMIT MEM-USED MIGRATED MOUNTPOINT
/dev/zram0     128M   4K   73B lzo-rle         4          0    4K        0B       4K       0B [SWAP]

# mount -t ext4 /dev/zram0 /mnt/zram
Code:
mount: /mnt/zram: /dev/zram0 already mounted or mount point busy.

# mount -t tmpfs /dev/zram0 /mnt/zram
# df -h
Code:
/dev/zram0      1.8G     0  1.8G   0% /mnt/zram
tmpfs           100M   26M   75M  26% /mnt/tmpfs

I am able to mount it automatically with this /etc/fstab line:
Code:
/dev/zram0 /mnt/zram tmpfs size=100M,mode=0755,uid=xbian 0 0

I could do:
# swapoff -a;zramctl --algorithm lz4 --streams 4 --size 128M /dev/zram0 && swapon
# zramctl --output-all
Code:
NAME       DISKSIZE DATA COMPR ALGORITHM STREAMS ZERO-PAGES TOTAL MEM-LIMIT MEM-USED MIGRATED MOUNTPOINT
/dev/zram0     128M   0B    0B lz4             4          0    0B        0B       0B       0B

# free -h
Code:
Swap:             0B          0B          0B

I could write a file to /mnt/zram/, but I see no zram usage:
# df -h /mnt/zram /mnt/tmpfs
Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/zram0      100M   26M   75M  26% /mnt/zram
tmpfs           100M   26M   75M  26% /mnt/tmpfs

# zramctl
Code:
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           128M   0B    0B    0B       4

# free -h
Code:
total        used        free      shared  buff/cache   available
Mem:           3.5Gi       330Mi       2.4Gi       160Mi       808Mi       2.8Gi
Swap:             0B          0B          0B

and test the speed:

# ioping -R /mnt/zram
(-R: rapid test with rapid I/O during 3 s (-q -i 0 -w 3))
Code:
--- /mnt/zram (tmpfs /dev/zram0 100 MiB) ioping statistics ---
311.8 k requests completed in 2.45 s, 1.19 GiB read, 127.1 k iops, 496.6 MiB/s
generated 311.8 k requests in 3.00 s, 1.19 GiB, 103.9 k iops, 405.9 MiB/s
min/avg/max/mdev = 6.41 us / 7.87 us / 1.78 ms / 6.88 us

vs regular tmpfs:
Code:
--- /mnt/tmpfs (tmpfs tmpfs 100 MiB) ioping statistics ---
314.2 k requests completed in 2.46 s, 1.20 GiB read, 127.9 k iops, 499.6 MiB/s
generated 314.2 k requests in 3.00 s, 1.20 GiB, 104.7 k iops, 409.2 MiB/s
min/avg/max/mdev = 6.39 us / 7.82 us / 1.36 ms / 4.46 us

# fdisk -x /dev/zram0
Code:
Disk /dev/zram0: 128 MiB, 134217728 bytes, 32768 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

# lsblk -a
Code:
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
ram0          1:0    0    4M  0 disk 
ram1          1:1    0    4M  0 disk 
...
zram0       254:0    0  128M  0 disk /mnt/zram
zram1       254:1    0    0B  0 disk 
zram2       254:2    0    0B  0 disk 
zram3       254:3    0    0B  0 disk

Do you have an idea for an /etc/fstab line that mounts a zram-based filesystem, please?
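From the man pages, my understanding is that fstab alone cannot do this: the zram device has to be sized (via zramctl or sysfs) and formatted before anything can be mounted on it, so the setup would have to live in a boot-time script or unit instead. A sketch only, untested on XBian and needing root:

```shell
# Sketch (untested, assumes util-linux zramctl and e2fsprogs are installed):
# run once at boot, e.g. from /etc/rc.local or a systemd service.

modprobe zram                                     # load the module if it is not built in
zramctl --algorithm lz4 --size 128M /dev/zram0    # size the device and pick the algorithm
mkfs.ext4 -O ^has_journal -L zramdisk /dev/zram0  # a real filesystem, so writes go through zram
mkdir -p /mnt/zram
mount -o noatime /dev/zram0 /mnt/zram
chown xbian /mnt/zram
```

I also suspect my earlier "mount -t tmpfs /dev/zram0 /mnt/zram" simply created an ordinary tmpfs, because for tmpfs the device argument is ignored; that would explain why zramctl kept showing 0B of data even though files appeared under /mnt/zram.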