For example, my Git installation folder is shown as having a size of ~600 MB, but it contains 129 (!) hard-linked files of ~3 MB each, so the real disk space those files use is not 129*3 = 387 MB but just 3 MB.
I understand this might be out of scope for Everything, but given that it already does some great memory caching and is blazingly fast, it might be in a unique position to also provide a fast way to adjust file/folder sizes so that hard links aren't counted multiple times.
I'm not sure how best to do it, so I'm just sharing some ideas:
- for each hard-linked file (i.e. a file with a hard link reference count x > 1), show its size as CurrentSize/x (here 3/129 ≈ 0.023 MB), and let this propagate to all folder sizes (see the first sketch after this list).
- keep the hard-linked file size as-is, but reduce the folder size when a folder contains multiple hard links pointing to the same file. This looks much more complicated, since it requires tracking which underlying files have already been counted, and it doesn't fully deduplicate (e.g. two subfolders that each contain one link to the same file will each still show the full file size).
- keep the file and folder sizes as-is if deduplication is too slow/complicated, but add an optional column showing the number of hard-linked files inside a given folder and their combined size, so you can roughly see the potentially duplicated space. Given the mix of sizes of the individual hard links you can't tell the exact amount to deduct from the total, but a rough indication would still be better than nothing (the second sketch after this list combines this with the previous idea).
- or something
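To make the first idea concrete, here is a minimal sketch in Python (outside of Everything, just to illustrate the math, with an illustrative path and function name): it walks a folder and charges each file only size/nlink bytes, so a file with 129 hard links contributes 1/129 of its size to every folder that links to it.

```python
import os

def apportioned_size(folder):
    """Idea 1 sketch: charge each file size/nlink bytes, so a file with
    129 hard links adds only 1/129 of its size to each folder's total."""
    total = 0.0
    for root, dirs, files in os.walk(folder):
        for name in files:
            try:
                st = os.stat(os.path.join(root, name), follow_symlinks=False)
            except OSError:
                continue  # unreadable entries are simply skipped
            links = st.st_nlink or 1   # hard link reference count (the "x" above)
            total += st.st_size / links
    return total

# Example: the Git installation folder from the post (path is illustrative).
print(f"{apportioned_size(r'C:\Program Files\Git') / 1024**2:.1f} MB apportioned")
```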
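And a similar sketch for the second and third ideas, again only an assumption of how it could work rather than anything Everything does today: it counts each physical file once by its (volume, file index) identity, which Python exposes as (st_dev, st_ino) on NTFS, and also reports how many entries are hard-linked and how much size they claim.

```python
import os

def folder_stats(folder):
    """Ideas 2 and 3 sketch: deduplicated size plus a count/size of
    hard-linked files (st_nlink > 1) inside the tree."""
    seen = set()          # file identities already counted
    dedup_size = 0        # idea 2: each physical file counted once
    hardlink_files = 0    # idea 3: how many entries are hard-linked...
    hardlink_size = 0     # ...and how much size they report in total
    for root, dirs, files in os.walk(folder):
        for name in files:
            try:
                st = os.stat(os.path.join(root, name), follow_symlinks=False)
            except OSError:
                continue
            if st.st_nlink > 1:
                hardlink_files += 1
                hardlink_size += st.st_size
            ident = (st.st_dev, st.st_ino)  # (volume serial, file index) on NTFS
            if ident not in seen:
                seen.add(ident)
                dedup_size += st.st_size
    return dedup_size, hardlink_files, hardlink_size
```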