files created then deleted at every second in tmp directory



By accident I noticed that files are continuously being created and then immediately deleted in the /tmp directory. By running ls -l /tmp repeatedly, I managed to catch some of the created files:



-rw------- 1 root root 0 Apr 2 19:37 YlOmPA069G
-rw------- 1 root root 0 Apr 2 19:37 l74jZzbcs6


or another example:



-rw------- 1 root root 0 Apr 2 19:44 AwVhWakvQ_
-rw------- 1 root root 0 Apr 2 19:44 RpRGl__cIM
-rw------- 1 root root 0 Apr 2 19:44 S0e72nkpBl
-rw------- 1 root root 0 Apr 2 19:44 emxIQQMSy2


This is Ubuntu 18.10 with kernel 4.18.0-16-generic, on an almost fresh install: I added some server software (nginx, mysql, php7.2-fpm), but the problem persists even with those services stopped.



What are these files, and why are they created?
How can I stop this behaviour? It is a very undesirable one on an SSD.



Thank you!



UPDATE



The question is about the case when /tmp is not held in RAM (no tmpfs).

The guilty software is x2goserver.service, which is otherwise a must-have.





























  • 2  "a very undesirable one on a SSD" explain this please? You don't have /tmp as a tmpfs? Why not? Why would files in memory damage an SSD?

    – Rinzwind, Apr 2 at 16:54

  • 2  /tmp may not necessarily be tmpfs, so it's a valid question.

    – Colin Ian King, Apr 2 at 16:56

  • 2  Yes, it would be undesirable on an SSD, at least if the directory metadata actually got written back to disk instead of just staying hot in cache. This is why /tmp is normally on tmpfs (a RAM-disk filesystem that uses the page cache as its backing store); you tagged your question with tmpfs, so your comments about SSDs seem out of place.

    – Peter Cordes, Apr 2 at 19:07

  • 1  Great. It's a must-have.

    – adrhc, Apr 3 at 5:54

  • 2  @PeterCordes I'm not sure that the statement "/tmp is normally on tmpfs" is valid for a normal Ubuntu user. In a default Ubuntu install, /tmp is on disk, and the OP would need to create the appropriate fstab entries to put it on a tmpfs.

    – Charles Green, Apr 4 at 13:00













Tags: files tmp






asked Apr 2 at 16:43 by adrhc; edited Apr 4 at 12:55







5 Answers
15














I suggest installing and running fnotifystat to detect the process that is creating these files:



sudo apt-get install fnotifystat
sudo fnotifystat -i /tmp


You will see the process that is doing the open/close/read/write activity, something like the following:



Total Open Close Read Write PID Process Pathname
3.0 1.0 1.0 0.0 1.0 5748 firefox /tmp/cubeb-shm-5748-input (deleted)
2.0 0.0 1.0 0.0 1.0 18135 firefox /tmp/cubeb-shm-5748-output (deleted)
1.0 1.0 0.0 0.0 0.0 5748 firefox /tmp/cubeb-shm-5748-output (deleted)
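The manual race of running ls -l /tmp in quick succession, as described in the question, can also be automated with a short polling loop. This is an illustrative sketch of my own (not part of the original answer, and much coarser than fnotifystat): it diffs successive directory snapshots, so it reports names that appeared between polls, but it cannot name the owning process and still misses files created and deleted entirely between two polls.

```python
import os
import time

def snapshot(path):
    """Return the set of directory entries currently present in `path`."""
    return set(os.listdir(path))

def watch_new_entries(path, interval=0.05, rounds=20):
    """Poll `path` and collect entries that appear between snapshots.

    A coarse stand-in for repeatedly running `ls -l /tmp`: anything
    created and removed entirely between two polls is still missed,
    which is why an fanotify-based tool like fnotifystat is more
    reliable for catching short-lived files.
    """
    seen = snapshot(path)
    new = set()
    for _ in range(rounds):
        time.sleep(interval)
        current = snapshot(path)
        new |= current - seen   # entries that appeared since the last poll
        seen = current
    return new
```

Running watch_new_entries("/tmp") for a second or so on the affected machine should surface random names like those shown in the question, but without the PID column; that is exactly the information fnotifystat adds.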





answered Apr 2 at 16:55 by Colin Ian King

  • 2  Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

    – Colin Ian King, Apr 3 at 8:12

  • 1  And you are also the first who answered the question (though that is no longer visible). It's a good tool, by the way.

    – adrhc, Apr 3 at 13:51

  • +1 for a very handy utility. Timely, too, as I can use it to monitor my next project of creating /tmp/... files for IPC between a daemon and user space instead of the more complicated DBus.

    – WinEunuuchs2Unix, Apr 10 at 1:41


















8














Determine which program/process is touching files



You can use tools such as lsof to determine which processes and binaries are touching/opening which files. This could become troublesome if the files change frequently, so you can instead set up a watch to notify you:



$ sudo fnotifystat -i /tmp


Sometimes, simply looking at the user or group owner gives you a good hint (i.e., ls -lsha).




Put /tmp into RAM instead of disk



If you desire, you can put your /tmp directory into RAM. You will have to determine if this is a smart move based on available RAM, as well as the size and frequency of read/writes.



$ sudo vim /etc/fstab

...
# tmpfs in RAM
tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
...


$ sudo mount /tmp
$ mount | grep tmp # Check /tmp is in RAM
tmpfs on /tmp type tmpfs (rw,noatime)


If you have enough RAM, this can be a very good thing to do, both for the longevity of your SSD and for the speed of your system. You can even accomplish this with smaller amounts of RAM if you tweak tmpreaper (sometimes tmpwatch) to be more aggressive.
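To confirm whether /tmp actually ended up on tmpfs after editing fstab, you can read /proc/mounts programmatically instead of grepping mount output. A small sketch under that assumption; the helper names fs_type and is_tmpfs are mine, not from the answer:

```python
def fs_type(mounts_text, mount_point):
    """Return the filesystem type recorded for `mount_point` in
    /proc/mounts-style text (whitespace fields: device, mountpoint,
    fstype, options, dump, pass)."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == mount_point:
            return fields[2]
    return None

def is_tmpfs(mount_point="/tmp"):
    """Linux-only check, equivalent to `mount | grep tmp`."""
    with open("/proc/mounts") as f:
        return fs_type(f.read(), mount_point) == "tmpfs"
```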






    5















    very undesirable one on a SSD




You tagged your question with tmpfs, so it is not quite clear to me how this relates to SSDs at all. tmpfs is an in-memory (or more precisely, in-block-cache) filesystem, so it will never hit a physical disk.



Furthermore, even if you had a physical backing store for your /tmp filesystem, unless you have a system with only a couple of kilobytes of RAM, those short-lived files will never hit the disk; all operations will happen in the cache.



    So, in other words, there is nothing to worry about since you are using tmpfs, and if you weren't, there still would be nothing to worry about.






    • I keep /tmp in RAM, so by mistake I also tagged the question with my current fs type (tmpfs). I have removed it now, but I find your answer useful too, so +1 from me.

      – adrhc, Apr 3 at 16:23

    • @adrhc: If your /tmp is in RAM, then it has nothing whatsoever to do with your SSD, so it is neither desirable nor undesirable but actually completely unrelated.

      – Jörg W Mittag, Apr 3 at 21:36

    • I agree, but the question is about when /tmp is not in RAM. It just happened that I had /tmp in RAM; still, the problem intrigued me.

      – adrhc, Apr 4 at 12:55


















    0














    People worry too much about SSD write endurance. Assuming that creating and deleting an empty file writes 24 kB every second, and using the 150 TBW spec for the popular Samsung 860 EVO 250 GB, wear-out takes 193 years!



    (150 * 10 ^ 12) / ((2 * 3 * 4 * 1024) * 60 * 60 * 24 * 365.25) = 193
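The estimate above is easy to re-derive; the 2 * 3 * 4 * 1024 factor is the answer's own rough guess of 24 KiB of metadata written per second:

```python
# Re-derive the answer's wear-out estimate: 150 TBW of endurance
# divided by an assumed ~24 KiB of writes every second.
endurance_bytes = 150 * 10**12           # 150 TBW (Samsung 860 EVO 250 GB spec)
bytes_per_second = 2 * 3 * 4 * 1024      # the answer's 24 KiB/s assumption
seconds_per_year = 60 * 60 * 24 * 365.25
years = endurance_bytes / (bytes_per_second * seconds_per_year)
print(round(years))  # 193
```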



    For ext4 filesystems, use "tune2fs -l" to find Lifetime writes. Or, use "smartctl -a" and look for Total_LBAs_Written. I always find the SSD has lots of life left.






    • The question is "What are the files created and why? How would I stop this behaviour?" How does your "answer" fit the question?

      – bummi, Apr 9 at 17:39

    • Though not directly answering the question, I find this information useful too, though it is not very precise about how to use those commands. E.g. with tune2fs I get tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1 (found a gpt partition table in /dev/nvme0n1).

      – adrhc, Apr 9 at 18:34



















    0














    You were using the wrong /dev/nvme0... name:



    $ sudo tune2fs -l /dev/nvme0n1
    tune2fs 1.42.13 (17-May-2015)
    tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1
    Couldn't find valid filesystem superblock.


    The right format is:



    $ sudo tune2fs -l /dev/nvme0n1p6
    tune2fs 1.42.13 (17-May-2015)
    Filesystem volume name: New_Ubuntu_16.04
    Last mounted on: /
    Filesystem UUID: b40b3925-70ef-447f-923e-1b05467c00e7
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags: signed_directory_hash
    Default mount options: user_xattr acl
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 2953920
    Block count: 11829504
    Reserved block count: 534012
    Free blocks: 6883701
    Free inodes: 2277641
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Reserved GDT blocks: 1021
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 8160
    Inode blocks per group: 510
    Flex block group size: 16
    Filesystem created: Thu Aug 2 20:14:59 2018
    Last mount time: Thu Apr 4 21:05:29 2019
    Last write time: Thu Feb 14 21:36:27 2019
    Mount count: 377
    Maximum mount count: -1
    Last checked: Thu Aug 2 20:14:59 2018
    Check interval: 0 (<none>)
    Lifetime writes: 4920 GB
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 256
    Required extra isize: 28
    Desired extra isize: 28
    Journal inode: 8
    First orphan inode: 1308352
    Default directory hash: half_md4
    Directory Hash Seed: a179d56c-6c68-468c-8070-ffa5bb7cd973
    Journal backup: inode blocks


    As far as the lifetime of the NVMe SSD goes:



    $ sudo nvme smart-log /dev/nvme0
    Smart Log for NVME device:nvme0 namespace-id:ffffffff
    critical_warning : 0
    temperature : 38 C
    available_spare : 100%
    available_spare_threshold : 10%
    percentage_used : 0%
    data_units_read : 22,351,778
    data_units_written : 14,667,833
    host_read_commands : 379,349,109
    host_write_commands : 127,359,479
    controller_busy_time : 952
    power_cycles : 1,925
    power_on_hours : 1,016
    unsafe_shutdowns : 113
    media_errors : 0
    num_err_log_entries : 598
    Warning Temperature Time : 0
    Critical Composite Temperature Time : 0
    Temperature Sensor 1 : 38 C
    Temperature Sensor 2 : 49 C
    Temperature Sensor 3 : 0 C
    Temperature Sensor 4 : 0 C
    Temperature Sensor 5 : 0 C
    Temperature Sensor 6 : 0 C
    Temperature Sensor 7 : 0 C
    Temperature Sensor 8 : 0 C


    The key line here is:



    percentage_used : 0%


    After 18 months of use, the SSD's percentage used is 0%. If it hits 1% after 3 years of use, then I know the SSD will last 300 years.
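Pulling individual fields such as percentage_used out of the smart-log text is a simple parsing exercise. A sketch of my own (the helper names are hypothetical, and it assumes the `name : value` layout shown above):

```python
def smart_log_field(text, field):
    """Extract one `name : value` field from `nvme smart-log` output."""
    for line in text.splitlines():
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        if name.strip() == field:
            return value.strip()
    return None

def percentage_used(text):
    """Return the drive wear as an int, e.g. 'percentage_used : 0%' -> 0."""
    value = smart_log_field(text, "percentage_used")
    return None if value is None else int(value.rstrip("%"))
```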



    Obviously, this answer would not fit into the comment section as a reply to the other comments.






    • What part of the tune2fs output relates to the SSD's lifetime?

      – adrhc, Apr 10 at 6:10

    • @adrhc I was showing the correct way of calling tune2fs, in response to your comment on Fraser Gunn's answer showing an error message.

      – WinEunuuchs2Unix, Apr 10 at 10:42












    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "89"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2faskubuntu.com%2fquestions%2f1130673%2ffiles-created-then-deleted-at-every-second-in-tmp-directory%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    15














    I suggest installing and running fnotifystat to detect the process that is creating these files:



    sudo apt-get install fnotifystat
    sudo fnotifystat -i /tmp


    You will see process that is doing the open/close/read/write activity something like the following:



    Total Open Close Read Write PID Process Pathname
    3.0 1.0 1.0 0.0 1.0 5748 firefox /tmp/cubeb-shm-5748-input (deleted)
    2.0 0.0 1.0 0.0 1.0 18135 firefox /tmp/cubeb-shm-5748-output (deleted)
    1.0 1.0 0.0 0.0 0.0 5748 firefox /tmp/cubeb-shm-5748-output (deleted)





    share|improve this answer


















    • 2





      Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

      – Colin Ian King
      Apr 3 at 8:12






    • 1





      And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

      – adrhc
      Apr 3 at 13:51












    • +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

      – WinEunuuchs2Unix
      Apr 10 at 1:41















    15














    I suggest installing and running fnotifystat to detect the process that is creating these files:



    sudo apt-get install fnotifystat
    sudo fnotifystat -i /tmp


    You will see process that is doing the open/close/read/write activity something like the following:



    Total Open Close Read Write PID Process Pathname
    3.0 1.0 1.0 0.0 1.0 5748 firefox /tmp/cubeb-shm-5748-input (deleted)
    2.0 0.0 1.0 0.0 1.0 18135 firefox /tmp/cubeb-shm-5748-output (deleted)
    1.0 1.0 0.0 0.0 0.0 5748 firefox /tmp/cubeb-shm-5748-output (deleted)





    share|improve this answer


















    • 2





      Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

      – Colin Ian King
      Apr 3 at 8:12






    • 1





      And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

      – adrhc
      Apr 3 at 13:51












    • +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

      – WinEunuuchs2Unix
      Apr 10 at 1:41













    15












    15








    15







    I suggest installing and running fnotifystat to detect the process that is creating these files:



    sudo apt-get install fnotifystat
    sudo fnotifystat -i /tmp


    You will see process that is doing the open/close/read/write activity something like the following:



    Total Open Close Read Write PID Process Pathname
    3.0 1.0 1.0 0.0 1.0 5748 firefox /tmp/cubeb-shm-5748-input (deleted)
    2.0 0.0 1.0 0.0 1.0 18135 firefox /tmp/cubeb-shm-5748-output (deleted)
    1.0 1.0 0.0 0.0 0.0 5748 firefox /tmp/cubeb-shm-5748-output (deleted)





    share|improve this answer













    I suggest installing and running fnotifystat to detect the process that is creating these files:



    sudo apt-get install fnotifystat
    sudo fnotifystat -i /tmp


    You will see process that is doing the open/close/read/write activity something like the following:



    Total Open Close Read Write PID Process Pathname
    3.0 1.0 1.0 0.0 1.0 5748 firefox /tmp/cubeb-shm-5748-input (deleted)
    2.0 0.0 1.0 0.0 1.0 18135 firefox /tmp/cubeb-shm-5748-output (deleted)
    1.0 1.0 0.0 0.0 0.0 5748 firefox /tmp/cubeb-shm-5748-output (deleted)






    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Apr 2 at 16:55









    Colin Ian KingColin Ian King

    12.6k13848




    12.6k13848







    • 2





      Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

      – Colin Ian King
      Apr 3 at 8:12






    • 1





      And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

      – adrhc
      Apr 3 at 13:51












    • +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

      – WinEunuuchs2Unix
      Apr 10 at 1:41












    • 2





      Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

      – Colin Ian King
      Apr 3 at 8:12






    • 1





      And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

      – adrhc
      Apr 3 at 13:51












    • +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

      – WinEunuuchs2Unix
      Apr 10 at 1:41







    2




    2





    Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

    – Colin Ian King
    Apr 3 at 8:12





    Postscript: I'm the author of this tool: kernel.ubuntu.com/~cking/fnotifystat

    – Colin Ian King
    Apr 3 at 8:12




    1




    1





    And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

    – adrhc
    Apr 3 at 13:51






    And you are also the first who answered the question (though no longer visible that). It's a good tool by the way.

    – adrhc
    Apr 3 at 13:51














    +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

    – WinEunuuchs2Unix
    Apr 10 at 1:41





    +1 for a very handy utility. Timely too as I can use it to monitor my next project of creating /tmp/... files for IPC between daemon and user space instead of more complicated DBUS.

    – WinEunuuchs2Unix
    Apr 10 at 1:41













    8














    Determine which program/process is touching files



    You can use tools such as lsof to determine which processes and binaries are touching/opening which files. This could become troublesome if the files change frequently, so you can instead set up a watch to notify you:



    $ sudo fnotifystat -i /tmp


    Sometimes, simply looking at the user or group owner gives you a good hint (ie: ls -lsha).




    Put /tmp into RAM instead of disk



    If you desire, you can put your /tmp directory into RAM. You will have to determine if this is a smart move based on available RAM, as well as the size and frequency of read/writes.



    $ sudo vim /etc/fstab

    ...
    # tmpfs in RAM
    tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
    ...


    $ sudo mount /tmp
    $ mount | grep tmp # Check /tmp is in RAM
    tmpfs on /tmp type tmpfs (rw,noatime)


    If you have enough RAM, this can be considered a very good thing to do for both the longevity of your SSD, as well as the speed of your system. You can even accomplish this with smaller amounts of RAM if you tweak tmpreaper (sometimes tmpwatch) to be more aggressive.






    share|improve this answer





























        edited Apr 2 at 17:02

























        answered Apr 2 at 16:57









        earthmeLon





































            "very undesirable one on a SSD"




            You tagged your question with tmpfs, so it is not quite clear to me how this relates to SSD at all. Tmpfs is an in-memory (or more precisely, in-block-cache) filesystem, so it will never hit a physical disk.



            Furthermore, even if you had a physical backing store for your /tmp filesystem, unless you have a system with only a couple of kilobytes of RAM, those short-lived files will never hit the disk, all operations will happen in the cache.



            So, in other words, there is nothing to worry about since you are using tmpfs, and if you weren't, there still would be nothing to worry about.





























            • I keep /tmp in RAM, so by mistake I also tagged the question with my current fs type (tmpfs). I removed it now, but I find your answer useful too, so +1 from me.

              – adrhc
              Apr 3 at 16:23












            • @adrhc: If your /tmp is in RAM, then it has nothing whatsoever to do with your SSD, so it is neither desirable nor undesirable but actually completely unrelated.

              – Jörg W Mittag
              Apr 3 at 21:36











            • I agree, but the question is about the case when /tmp is not in RAM. It just happened that I had /tmp in RAM; still, the problem intrigued me.

              – adrhc
              Apr 4 at 12:55















            answered Apr 3 at 7:14









            Jörg W Mittag






































            People worry too much about SSD write endurance. Assuming that creating and deleting an empty file writes 24 kB every second, and using the 150 TBW spec for the popular Samsung 860 EVO 250 GB, wear-out takes 193 years!



            (150 * 10 ^ 12) / ((2 * 3 * 4 * 1024) * 60 * 60 * 24 * 365.25) = 193



            For ext4 filesystems, use "tune2fs -l" to find Lifetime writes. Or, use "smartctl -a" and look for Total_LBAs_Written. I always find the SSD has lots of life left.
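            The arithmetic above can be sanity-checked in a few lines (the ~24 kB/s churn estimate and the 150 TBW endurance figure are the answer's assumptions, not measurements):

            ```python
            TBW = 150e12                          # Samsung 860 EVO 250 GB rated endurance, in bytes
            bytes_per_second = 2 * 3 * 4 * 1024   # assumed metadata churn: ~24 kB every second
            seconds_per_year = 60 * 60 * 24 * 365.25

            years_to_wear_out = TBW / (bytes_per_second * seconds_per_year)
            print(round(years_to_wear_out))       # 193
            ```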





























            • The question is "What are the files created and why? How would I stop this behaviour?", how does your "answer" fit to the question?

              – bummi
              Apr 9 at 17:39











            • Though it doesn't directly answer the question, I find this information useful too, though it isn't very precise about how to use those commands. E.g. with tune2fs I get tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1 Found a gpt partition table in /dev/nvme0n1.

              – adrhc
              Apr 9 at 18:34


























            answered Apr 9 at 17:36









            Fraser Gunn







































            You were using the wrong /dev/nvme0... name:



            $ sudo tune2fs -l /dev/nvme0n1
            tune2fs 1.42.13 (17-May-2015)
            tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1
            Couldn't find valid filesystem superblock.


            The right format is:



            $ sudo tune2fs -l /dev/nvme0n1p6
            tune2fs 1.42.13 (17-May-2015)
            Filesystem volume name: New_Ubuntu_16.04
            Last mounted on: /
            Filesystem UUID: b40b3925-70ef-447f-923e-1b05467c00e7
            Filesystem magic number: 0xEF53
            Filesystem revision #: 1 (dynamic)
            Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
            Filesystem flags: signed_directory_hash
            Default mount options: user_xattr acl
            Filesystem state: clean
            Errors behavior: Continue
            Filesystem OS type: Linux
            Inode count: 2953920
            Block count: 11829504
            Reserved block count: 534012
            Free blocks: 6883701
            Free inodes: 2277641
            First block: 0
            Block size: 4096
            Fragment size: 4096
            Reserved GDT blocks: 1021
            Blocks per group: 32768
            Fragments per group: 32768
            Inodes per group: 8160
            Inode blocks per group: 510
            Flex block group size: 16
            Filesystem created: Thu Aug 2 20:14:59 2018
            Last mount time: Thu Apr 4 21:05:29 2019
            Last write time: Thu Feb 14 21:36:27 2019
            Mount count: 377
            Maximum mount count: -1
            Last checked: Thu Aug 2 20:14:59 2018
            Check interval: 0 (<none>)
            Lifetime writes: 4920 GB
            Reserved blocks uid: 0 (user root)
            Reserved blocks gid: 0 (group root)
            First inode: 11
            Inode size: 256
            Required extra isize: 28
            Desired extra isize: 28
            Journal inode: 8
            First orphan inode: 1308352
            Default directory hash: half_md4
            Directory Hash Seed: a179d56c-6c68-468c-8070-ffa5bb7cd973
            Journal backup: inode blocks


            As far as lifetime of NVMe SSD goes:



            $ sudo nvme smart-log /dev/nvme0
            Smart Log for NVME device:nvme0 namespace-id:ffffffff
            critical_warning : 0
            temperature : 38 C
            available_spare : 100%
            available_spare_threshold : 10%
            percentage_used : 0%
            data_units_read : 22,351,778
            data_units_written : 14,667,833
            host_read_commands : 379,349,109
            host_write_commands : 127,359,479
            controller_busy_time : 952
            power_cycles : 1,925
            power_on_hours : 1,016
            unsafe_shutdowns : 113
            media_errors : 0
            num_err_log_entries : 598
            Warning Temperature Time : 0
            Critical Composite Temperature Time : 0
            Temperature Sensor 1 : 38 C
            Temperature Sensor 2 : 49 C
            Temperature Sensor 3 : 0 C
            Temperature Sensor 4 : 0 C
            Temperature Sensor 5 : 0 C
            Temperature Sensor 6 : 0 C
            Temperature Sensor 7 : 0 C
            Temperature Sensor 8 : 0 C


            The key line here is:



            percentage_used : 0%


            After 18 months of use the SSD percentage use is 0%. If after 3 years of use it hits 1% then I know the SSD will last 300 years.
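            That projection is a simple linear extrapolation of SMART's percentage_used; a minimal sketch (the 3-year / 1 % figures are the hypothetical from the paragraph above):

            ```python
            def projected_lifetime_years(years_elapsed, percentage_used):
                """Linearly extrapolate SMART percentage_used (0-100) to full wear-out."""
                if percentage_used <= 0:
                    raise ValueError("no measurable wear yet; cannot extrapolate")
                return years_elapsed * 100.0 / percentage_used

            print(projected_lifetime_years(3, 1))  # 300.0
            ```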



            Obviously this answer would not fit into the comment section as a reply to the other comments.





























            • What part from the tune2fs output relates to the SSD's life time?

              – adrhc
              Apr 10 at 6:10











            • @adrhc I was showing the correct way of calling tune2fs in response to your comment on Fraser Gunn's answer showing an error message.

              – WinEunuuchs2Unix
              Apr 10 at 10:42
















            0














            You were using the wrong /dev/nvme0... name:



            $ sudo tune2fs -l /dev/nvme0n1
            tune2fs 1.42.13 (17-May-2015)
            tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1
            Couldn't find valid filesystem superblock.


            The right format is:



            $ sudo tune2fs -l /dev/nvme0n1p6
            tune2fs 1.42.13 (17-May-2015)
            Filesystem volume name: New_Ubuntu_16.04
            Last mounted on: /
            Filesystem UUID: b40b3925-70ef-447f-923e-1b05467c00e7
            Filesystem magic number: 0xEF53
            Filesystem revision #: 1 (dynamic)
            Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
            Filesystem flags: signed_directory_hash
            Default mount options: user_xattr acl
            Filesystem state: clean
            Errors behavior: Continue
            Filesystem OS type: Linux
            Inode count: 2953920
            Block count: 11829504
            Reserved block count: 534012
            Free blocks: 6883701
            Free inodes: 2277641
            First block: 0
            Block size: 4096
            Fragment size: 4096
            Reserved GDT blocks: 1021
            Blocks per group: 32768
            Fragments per group: 32768
            Inodes per group: 8160
            Inode blocks per group: 510
            Flex block group size: 16
            Filesystem created: Thu Aug 2 20:14:59 2018
            Last mount time: Thu Apr 4 21:05:29 2019
            Last write time: Thu Feb 14 21:36:27 2019
            Mount count: 377
            Maximum mount count: -1
            Last checked: Thu Aug 2 20:14:59 2018
            Check interval: 0 (<none>)
            Lifetime writes: 4920 GB
            Reserved blocks uid: 0 (user root)
            Reserved blocks gid: 0 (group root)
            First inode: 11
            Inode size: 256
            Required extra isize: 28
            Desired extra isize: 28
            Journal inode: 8
            First orphan inode: 1308352
            Default directory hash: half_md4
            Directory Hash Seed: a179d56c-6c68-468c-8070-ffa5bb7cd973
            Journal backup: inode blocks


            As far as lifetime of NVMe SSD goes:



            $ sudo nvme smart-log /dev/nvme0
            Smart Log for NVME device:nvme0 namespace-id:ffffffff
            critical_warning : 0
            temperature : 38 C
            available_spare : 100%
            available_spare_threshold : 10%
            percentage_used : 0%
            data_units_read : 22,351,778
            data_units_written : 14,667,833
            host_read_commands : 379,349,109
            host_write_commands : 127,359,479
            controller_busy_time : 952
            power_cycles : 1,925
            power_on_hours : 1,016
            unsafe_shutdowns : 113
            media_errors : 0
            num_err_log_entries : 598
            Warning Temperature Time : 0
            Critical Composite Temperature Time : 0
            Temperature Sensor 1 : 38 C
            Temperature Sensor 2 : 49 C
            Temperature Sensor 3 : 0 C
            Temperature Sensor 4 : 0 C
            Temperature Sensor 5 : 0 C
            Temperature Sensor 6 : 0 C
            Temperature Sensor 7 : 0 C
            Temperature Sensor 8 : 0 C


            The key line here is:



            percentage_used : 0%


            After 18 months of use the SSD percentage use is 0%. If after 3 years of use it hits 1% then I know the SSD will last 300 years.



            Obviously this answer would not fit into comment section to reply to other comments.






            share|improve this answer























            • What part from the tune2fs output relates to the SSD's life time?

              – adrhc
              Apr 10 at 6:10











            • @adrhc I was showing the correct way of calling tune2fs in response to your comment on Fraser Gunn's answer showing an error message.

              – WinEunuuchs2Unix
              Apr 10 at 10:42














            0












            0








            0







            You were using the wrong /dev/nvme0... name:



            $ sudo tune2fs -l /dev/nvme0n1
            tune2fs 1.42.13 (17-May-2015)
            tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1
            Couldn't find valid filesystem superblock.


            The right format is:



            $ sudo tune2fs -l /dev/nvme0n1p6
            tune2fs 1.42.13 (17-May-2015)
            Filesystem volume name: New_Ubuntu_16.04
            Last mounted on: /
            Filesystem UUID: b40b3925-70ef-447f-923e-1b05467c00e7
            Filesystem magic number: 0xEF53
            Filesystem revision #: 1 (dynamic)
            Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
            Filesystem flags: signed_directory_hash
            Default mount options: user_xattr acl
            Filesystem state: clean
            Errors behavior: Continue
            Filesystem OS type: Linux
            Inode count: 2953920
            Block count: 11829504
            Reserved block count: 534012
            Free blocks: 6883701
            Free inodes: 2277641
            First block: 0
            Block size: 4096
            Fragment size: 4096
            Reserved GDT blocks: 1021
            Blocks per group: 32768
            Fragments per group: 32768
            Inodes per group: 8160
            Inode blocks per group: 510
            Flex block group size: 16
            Filesystem created: Thu Aug 2 20:14:59 2018
            Last mount time: Thu Apr 4 21:05:29 2019
            Last write time: Thu Feb 14 21:36:27 2019
            Mount count: 377
            Maximum mount count: -1
            Last checked: Thu Aug 2 20:14:59 2018
            Check interval: 0 (<none>)
            Lifetime writes: 4920 GB
            Reserved blocks uid: 0 (user root)
            Reserved blocks gid: 0 (group root)
            First inode: 11
            Inode size: 256
            Required extra isize: 28
            Desired extra isize: 28
            Journal inode: 8
            First orphan inode: 1308352
            Default directory hash: half_md4
            Directory Hash Seed: a179d56c-6c68-468c-8070-ffa5bb7cd973
            Journal backup: inode blocks


            As far as lifetime of NVMe SSD goes:



            $ sudo nvme smart-log /dev/nvme0
            Smart Log for NVME device:nvme0 namespace-id:ffffffff
            critical_warning : 0
            temperature : 38 C
            available_spare : 100%
            available_spare_threshold : 10%
            percentage_used : 0%
            data_units_read : 22,351,778
            data_units_written : 14,667,833
            host_read_commands : 379,349,109
            host_write_commands : 127,359,479
            controller_busy_time : 952
            power_cycles : 1,925
            power_on_hours : 1,016
            unsafe_shutdowns : 113
            media_errors : 0
            num_err_log_entries : 598
            Warning Temperature Time : 0
            Critical Composite Temperature Time : 0
            Temperature Sensor 1 : 38 C
            Temperature Sensor 2 : 49 C
            Temperature Sensor 3 : 0 C
            Temperature Sensor 4 : 0 C
            Temperature Sensor 5 : 0 C
            Temperature Sensor 6 : 0 C
            Temperature Sensor 7 : 0 C
            Temperature Sensor 8 : 0 C


            The key line here is:



            percentage_used : 0%


            After 18 months of use the SSD percentage use is 0%. If after 3 years of use it hits 1% then I know the SSD will last 300 years.



            Obviously this answer would not fit into comment section to reply to other comments.






            share|improve this answer













            You were using the wrong /dev/nvme0... name:



            $ sudo tune2fs -l /dev/nvme0n1
            tune2fs 1.42.13 (17-May-2015)
            tune2fs: Bad magic number in super-block while trying to open /dev/nvme0n1
            Couldn't find valid filesystem superblock.


            The right format is:



            $ sudo tune2fs -l /dev/nvme0n1p6
            tune2fs 1.42.13 (17-May-2015)
            Filesystem volume name: New_Ubuntu_16.04
            Last mounted on: /
            Filesystem UUID: b40b3925-70ef-447f-923e-1b05467c00e7
            Filesystem magic number: 0xEF53
            Filesystem revision #: 1 (dynamic)
            Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
            Filesystem flags: signed_directory_hash
            Default mount options: user_xattr acl
            Filesystem state: clean
            Errors behavior: Continue
            Filesystem OS type: Linux
            Inode count: 2953920
            Block count: 11829504
            Reserved block count: 534012
            Free blocks: 6883701
            Free inodes: 2277641
            First block: 0
            Block size: 4096
            Fragment size: 4096
            Reserved GDT blocks: 1021
            Blocks per group: 32768
            Fragments per group: 32768
            Inodes per group: 8160
            Inode blocks per group: 510
            Flex block group size: 16
            Filesystem created: Thu Aug 2 20:14:59 2018
            Last mount time: Thu Apr 4 21:05:29 2019
            Last write time: Thu Feb 14 21:36:27 2019
            Mount count: 377
            Maximum mount count: -1
            Last checked: Thu Aug 2 20:14:59 2018
            Check interval: 0 (<none>)
            Lifetime writes: 4920 GB
            Reserved blocks uid: 0 (user root)
            Reserved blocks gid: 0 (group root)
            First inode: 11
            Inode size: 256
            Required extra isize: 28
            Desired extra isize: 28
            Journal inode: 8
            First orphan inode: 1308352
            Default directory hash: half_md4
            Directory Hash Seed: a179d56c-6c68-468c-8070-ffa5bb7cd973
            Journal backup: inode blocks


            As far as lifetime of NVMe SSD goes:



            $ sudo nvme smart-log /dev/nvme0
            Smart Log for NVME device:nvme0 namespace-id:ffffffff
            critical_warning : 0
            temperature : 38 C
            available_spare : 100%
            available_spare_threshold : 10%
            percentage_used : 0%
            data_units_read : 22,351,778
            data_units_written : 14,667,833
            host_read_commands : 379,349,109
            host_write_commands : 127,359,479
            controller_busy_time : 952
            power_cycles : 1,925
            power_on_hours : 1,016
            unsafe_shutdowns : 113
            media_errors : 0
            num_err_log_entries : 598
            Warning Temperature Time : 0
            Critical Composite Temperature Time : 0
            Temperature Sensor 1 : 38 C
            Temperature Sensor 2 : 49 C
            Temperature Sensor 3 : 0 C
            Temperature Sensor 4 : 0 C
            Temperature Sensor 5 : 0 C
            Temperature Sensor 6 : 0 C
            Temperature Sensor 7 : 0 C
            Temperature Sensor 8 : 0 C


            The key line here is:



            percentage_used : 0%


After 18 months of use the SSD's percentage used is still 0%. If it were to hit 1% after 3 years of use, that would project to a lifetime of about 300 years (3 years ÷ 1% × 100%).
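That projection is simple proportional arithmetic: if p percent of rated endurance is consumed after y years, the linear estimate of total life is y × 100 / p years. A minimal sketch, using the hypothetical figures from the sentence above:

```shell
# Hypothetical figures: 1% of rated endurance consumed after 3 years.
# In practice percentage_used comes from `sudo nvme smart-log <device>`.
years_in_service=3
percentage_used=1
# Linear projection: total life = years * 100 / percent used.
echo "Projected lifetime: $(( years_in_service * 100 / percentage_used )) years"
# prints: Projected lifetime: 300 years
```

Note the projection is only linear extrapolation; real wear depends on the workload staying similar, and `percentage_used` reports in whole percent, so early readings of 0% give no estimate at all.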



Obviously this answer would not fit into the comment section as a reply to the other comments.







            answered Apr 10 at 1:27









WinEunuuchs2Unix

• What part of the tune2fs output relates to the SSD's lifetime?

              – adrhc
              Apr 10 at 6:10











            • @adrhc I was showing the correct way of calling tune2fs in response to your comment on Fraser Gunn's answer showing an error message.

              – WinEunuuchs2Unix
              Apr 10 at 10:42


















            Thanks for contributing an answer to Ask Ubuntu!

