/hydrus/ - Hydrus Network

Hydrus Bunker



Version 388 Anonymous Board owner 03/11/2020 (Wed) 22:23:32 Id: 415127 [Preview] No. 561
https://youtube.com/watch?v=X-vdjur459c [Embed]
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v388/Hydrus.Network.388.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v388/Hydrus.Network.388.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v388/Hydrus.Network.388.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v388/Hydrus.Network.388.-.Linux.-.Executable.tar.gz
source
tar.gz: https://github.com/hydrusnetwork/hydrus/archive/v388.tar.gz

I had a great week. The client can now save and load searches.

favourite searches

Every tag autocomplete input text box that searches for files--the most obvious being the one on normal search pages--now has a star icon button beside it. Click this, and you get a menu to save your current search, manage your saved searches, or load up one that is saved!

Currently, the saved information is the list of search terms (the tags and system predicates), the current file and tag domains (e.g. my files/all known tags), whether the system is 'searching immediately' or waiting, and optionally the current sort and collect.

Each saved search has a name and optionally a folder, for easy grouping. To show how it works, all clients now start with an example inbox processing search.
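As a rough illustration, here is a minimal sketch of the kind of information a saved search holds (the names and structure here are illustrative, not the client's actual objects):

# Hypothetical sketch of what a favourite search stores, based on the
# description above. Names are illustrative, not the client's real classes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FavouriteSearch:
    name: str                            # e.g. 'inbox processing'
    folder: Optional[str] = None         # optional grouping folder
    predicates: List[str] = field(default_factory=list)  # tags and system predicates
    file_domain: str = 'my files'        # current file service
    tag_domain: str = 'all known tags'   # current tag service
    searching_immediately: bool = True   # live search vs waiting
    sort: Optional[str] = None           # optional sort-by
    collect: Optional[str] = None        # optional collect-by

example = FavouriteSearch(
    name='inbox processing',
    predicates=['system:inbox', 'system:limit=256'],
)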

If you, like me, keep ten or fifteen empty search pages open (for me it is mostly different creator+inbox searches), I hope this system lets you collapse it down, making for a lighter and simpler session. It should also help for more unusual workflows like duplicate filtering and even file maintenance jobs.

There may be a couple of bugs in the system, something like a collect-by not being saved or updated correctly. Let me know how you get on!


Anonymous Board owner 03/11/2020 (Wed) 22:24:18 Id: 415127 [Preview] No.562 del
the rest

A note from the users managing Hydrus Companion: "The Chrome Web Store release of Hydrus Companion is no longer available due to publishing issues. If you have been using it in the past, please install the extension manually as outlined here instead: https://gitgud.io/prkc/hydrus-companion"

e621 changed their site format, breaking hydrus's default downloaders. These are now updated, thanks to a user's contribution. The URL format has changed, so unfortunately your subscriptions will take longer on their next cycle (but will not redownload files) and will note that they have hit their periodic limits, which you can ignore. If you use an e621 login with hydrus, you may also want to check your account-based tag blacklist on the site, as I have heard this has also changed.

Hydrus's default mpv configuration file now has some normalisation that seems to work well.

If your subscriptions are paused, or if all network traffic is paused, this is now noted in the status bar.

full list

- favourite searches:
- hydrus can now save, load, and edit favourite searches. this first system stores searches with a name and an optional folder name, and contains search predicates, file and tag domain, whether the search is live or not, and optionally sort-by and collect-by
- this is program-wide and all accessed through the new 'star' icon menu button beside any 'read' tag autocomplete input on search pages, duplicate pages, export folder ui, and file maintenance selection
- wrote a favourite searches manager
- wrote a dialog to manage favourite searches
- wrote a dialog to edit a single favourite search
- wrote load and save search functionality
- autocomplete dropdowns that have buttons beside them now stretch their floating dropdown windows across the button width also
- cleaned a variety of search code, simplifying objects and responsibility
- cleaned up some collect-by ui code
- refactored sort and collect controls to better location
- refactored search constants
- numerous small search code fixes and cleanup
- renamed clientguipredicates to clientguisearch


Anonymous Board owner 03/11/2020 (Wed) 22:26:10 Id: 415127 [Preview] No.563 del
- the rest:
- a note from the users managing Hydrus Companion: The Chrome Web Store release of Hydrus Companion is no longer available due to publishing issues. If you have been using it in the past, please install the extension manually as outlined here instead: https://gitgud.io/prkc/hydrus-companion
- the default e621 downloader is updated to their new system, thanks to a user's submission. if you log in to e621 with hydrus or the hydrus companion and discover some tags are now blacklisted, please check your blacklist settings on your account on the site
- an old test e-hentai login script from 2018 that is no longer in the client defaults will be deleted from clients that still have it today. if the user has no other login script for e-hentai, the domain entry will be deleted as well. this removes potential technical barriers for users that wish to use hydrus companion to access e-hentai, which is now the recommended method
- hydrus mpv now has an appropriate stream title, which propagates up to the os-level sound mixer. it was previously the ugly hydrus filename
- improved error handling when mpv is passed an invalid conf
- the default mpv conf now has audio normalisation that seems to work ok
- fixed an issue with the 'delete/move out missing/corrupt file' file maintenance job where record deletes were not processing correctly. it now deletes the file record correctly and also clears that deletion record, to make re-import of the correct file, if found, easier
- all hydrus menu labels are now "middle...elided" when they are greater than 64 characters
- all new hdd, url, and simple download pages should now obey the 'remove files when trashed' rule. pages in existing sessions will not
- updated the user-created CutieDuck darkmode qss file to the latest version, which alters the recent hydrus qss styling colours like green/red button labels
- did a full pass of all service fetching--all file and tag services should now present in lists and tabs in service_type, alphabetical order, e.g. for manage tag siblings, the tabs will always be local_tags, tag_repositories, both in alphabetical order
- fixed an issue where a 'get darker or lighter comparison colour' calculation was not working well for black or very dark colours
- if subscriptions or general network traffic is paused, the bandwidth section of the main gui statusbar now says it
- the status bar now tooltips each section
- clarified some labels on the edit url class panel
- moved all delayed focus-shifting code to a more stable system
- cleaned up how the global icon cache is initialised and referenced
- updated the hydrus project gitignore to hide all db, log, server, recovery, and media files that could be under the db directory
- updated the endchan links in the help to have a .org secondary link
- more general code refactoring

next week

Next week is a 'small jobs' week, so I will be back to catching up on small things and general mpv/shortcuts/cleanup work. Also Deviant Art are perhaps going through some more layout changes of their own, so I will be looking at that.

As the virus hits, I expect to keep working on hydrus as normal. I am supplied, low social contact, and healthy. I hope you are as well. If we all end up locked down for a month, I hope hydrus can offer a distraction. I will post if I have to stop for a couple of weeks.


Anonymous 03/13/2020 (Fri) 16:11:08 Id: 0f978d [Preview] No.564 del
Hey there.

So with the update from 386 to 388 my e621 subscriptions' "last known url" detection broke.

As a workaround I changed the gallery parser and the URL class so that post URLs are still generated in the old format. This works because e621 redirects requests for old pages to the new pages.

But if e621 removes the redirect, I'd finally be out of luck.
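For reference, the rewrite boils down to something like this (a rough sketch; treat the exact old/new url formats here as my assumption):

# Hypothetical sketch of rewriting new-format e621 post urls back to the old
# format, as the workaround above does in the parser. The exact url formats
# are an assumption, not taken from the hydrus defaults.
import re

def to_old_format(url: str) -> str:
    # assumed new style: https://e621.net/posts/123456
    # assumed old style: https://e621.net/post/show/123456
    m = re.match(r'https?://e621\.net/posts/(\d+)', url)
    if m is None:
        return url  # leave anything unrecognised untouched
    return 'https://e621.net/post/show/' + m.group(1)

print(to_old_format('https://e621.net/posts/123456'))
# https://e621.net/post/show/123456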

Do you have a suggestion for a better way to deal with this?

Thanks!


Anonymous Board owner 03/15/2020 (Sun) 21:11:58 Id: b84bd6 [Preview] No.565 del
Damn, thank you for this report.

I'll roll out a 'e621 post page (old format)' url class to re-detect these for this week. I did a similar thing for Deviant Art some time ago.

At some point I would like en masse url conversion in the hydrus db, or virtualised url storage support (e.g. storing 'this is e621 file, id 123456' rather than the url itself, and then creating the current url format on demand), but these will both be some time away, so we'll have to manage with patches for now.
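To sketch what I mean by virtualised storage (purely illustrative, not real client code, and the current e621 url format here is an assumption):

# Hypothetical sketch of virtualised url storage: keep (site, id) rather than
# the literal url, and render the current url format on demand.
E621_POST_FORMAT = 'https://e621.net/posts/{post_id}'  # assumed current format

def store_virtual_url(site: str, post_id: int) -> tuple:
    # what would be written to the db instead of the raw url
    return (site, post_id)

def render_url(virtual: tuple) -> str:
    site, post_id = virtual
    if site == 'e621 post':
        return E621_POST_FORMAT.format(post_id=post_id)
    raise ValueError('unknown site: ' + site)

v = store_virtual_url('e621 post', 123456)
print(render_url(v))  # https://e621.net/posts/123456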


Anonymous Board owner 03/15/2020 (Sun) 21:20:31 Id: b84bd6 [Preview] No.566 del
>>564
>>565
And now I realise I didn't read your post correctly, sorry. I thought you were trying to find files using system:known url and couldn't match e621 old url format, which also needs a fix.

There isn't a great solution for the subscription synchronisation at the moment--it would need an url virtualisation or mass conversion system to be able to switch over. As I think I say above in the release post, unfortunately for now your subscriptions will resync an additional 'periodic limit' (the bit where it says 'on normal checks, get at most this many newer files') of urls for one sync cycle as they adjust; then they will be fine again on future checks, as they have the new url format to work on. It is a bit like they are starting fresh.

Note that while your e621 subs will hit these URLs, they will not redownload the files as the e621 parser offers an md5 hash check. It'll just hit the html (or I think, now json).
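Roughly, that md5 check amounts to a pre-download lookup--if the parser can pull an md5 from the post, the client matches it against hashes it already knows and skips the file download. A simplified sketch with made-up helper names:

# Simplified sketch of an md5-based pre-download skip, with hypothetical
# structures standing in for the client's real db lookups.
def should_download(post_metadata: dict, known_md5s: set) -> bool:
    md5 = post_metadata.get('md5')  # parsed from the post's html/json
    if md5 is not None and md5 in known_md5s:
        return False  # already have this file, skip the file download
    return True       # no md5 available, or unseen hash: download the file

known = {'d41d8cd98f00b204e9800998ecf8427e'}
print(should_download({'md5': 'd41d8cd98f00b204e9800998ecf8427e'}, known))  # False
print(should_download({}, known))  # True, nothing to check against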

As subs are automatic systems, I recommend all subscriptions get a small 'periodic file limit' for these sorts of 'damn, it went wrong' cases.


Anonymous 03/16/2020 (Mon) 18:25:51 Id: 922700 [Preview] No.567 del
>>566
That's exactly how I even noticed that there was a problem: it did appear to re-download all the files. At least the amount of data and time required (about 5.6 GB and a few hours) for about 15k files would indicate that.

By the time I was expecting subs to be finished, it had barely started, because it was moving so slowly.


Anonymous Board owner 03/17/2020 (Tue) 00:18:11 Id: 1d0c54 [Preview] No.568 del
>>567
Damn, after looking at this problem more, it looks like the new e621 parser is not pulling md5 reliably any more. I believe this did work, and perhaps it does still work for some posts. I am not sure why this is, or if it is a secondary change e621 have only just made in the past few days. I am very sorry for the inconvenience. A different user sent me an updated e621 parser this week that I was already going to be rolling in to 389 that pulls rating tags and the md5 in a different and more reliable way. That should fix the bandwidth issue here.

Thank you for the follow-up. I really do hope to have a better fix for url matching in future so we do not have these headaches when sites decide to reformat their urls.

I recommend you pause your e621 subs for now.


Anonymous 03/17/2020 (Tue) 16:54:53 Id: 89b002 [Preview] No.569 del
>>568

Thank you. For now my subs work, since I made the workaround in the parser to pull URLs with the old style.

For 389 I'll revert that and check if the MD5 detection is working again.

I'll most likely send a reply here on how it went.

Thanks for all your work.


Anonymous Board owner 03/18/2020 (Wed) 04:31:17 Id: 9294f1 [Preview] No.570 del
I had a great week. I fixed many small bugs, added some quality of life, and am rolling in updated downloaders for e621 and Deviant Art.

The release should be as normal tomorrow.


Anonymous 03/23/2020 (Mon) 23:16:00 Id: 3141c1 [Preview] No.576 del
>>570

Just wanted to let you know that it works as expected now.

Subs re-download pages up to the limit and then skip the images due to the md5 check.

I might go through my subs to delete all the duplicate links, but I'm not too keen on that. Will decide on a whim.

Thanks!


Anonymous 03/26/2020 (Thu) 12:16:30 Id: 3baf08 [Preview] No.584 del
>>576

Btw, what happens when I remove "file import status" entries from a subscription query:
- Does this affect a file's "known URLs"?
- Does this affect a file's metadata, such as "time imported"?

Or do I only "risk" confusing the subscription about which files it already downloaded when it checks for new files? Because in that case I might go on a deletion spree to get rid of a few hundred thousand entries.

Thanks!


##+ndD36 Board owner 03/30/2020 (Mon) 19:46:08 Id: be886b [Preview] No.591 del
Sorry for late replies, I missed these.

>>576
Great!

>>584
tl;dr: you can leave them alone, it isn't a problem to have them in there.

A subscription needs the gallery's latest couple of pages' worth of URLs to work well. It bases its timings off them. The 'known url' stuff all works through a different mechanism at the more permanent db level; a sub keeps URLs to know what and when it should get in its syncs. If you really want, you can delete some old ones, but it normally is not worth the human effort, and if you nuke too many, your sub will get too much in its next run.

Don't worry about surplus URLs; a sub automatically deletes old ones from its cache when it knows it won't need them any more. You can sometimes get a spike of many urls in a sub, but they'll typically settle down to, I think, about 200 per query.


Anonymous 03/30/2020 (Mon) 21:08:03 Id: 25ac7f [Preview] No.595 del
>>591

I have never seen a sub delete an old URL. There are tens of thousands of URLs from years ago in there.

Maybe you are talking about something else though. Can you elaborate on that cache cleanup? What are the conditions, roughly? Maybe I have configured something in a way that messes it up.

Good to know that it's only for the what and when. I have overridden the automatic scheduling with a fixed one, since I kinda like to know when things happen. And for the "what", it's probably enough to keep somewhat recent URLs, so I could probably purge some ancient entries.

Thanks!


Anonymous 04/01/2020 (Wed) 04:25:11 Id: d89d54 [Preview] No.605 del
Is using hydrus for managing a collection that includes doujins a terrible idea?

Or would I simply have to script auto-tagging of those doujins with proper author & title + page index? (Of course I have that info in .json files made from extracted metadata of the source sites)


Anonymous 04/01/2020 (Wed) 04:26:46 Id: d89d54 [Preview] No.606 del
>>605
I suppose I'd need to compile a metadata/hash associative DB before importing anything.


Anonymous Board owner 04/05/2020 (Sun) 19:14:29 Id: af0af2 [Preview] No.617 del
>>605
>>606
It is a mix. It works, but I don't like how hydrus manages paged/chaptered media at the moment.

If you have nice cbr/cbz files, I'd say don't import their files to hydrus for now, and just use another comic reader software. I expect to have actual cbr/cbz support in the next two years, with hydrus able to go into the archive and read through the separate file pages in the media viewer like any other reader. I am now generally of the opinion that treating chapters/volumes as single files is better for comics than treating pages as separate files.

That said, if you can get nice page/chapter/series tags to your files using filename-import-parsing, hydrus does work, and it is usually easy to reverse the operation as you can export the files again with nice filenames. It just takes that extra effort, and you have to get the page tags correct or it is a clusterfuck to try to fix atm.
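As a rough illustration of that filename parsing (the 'Series - c01 - p003' naming scheme here is just an assumed example; the real client does this through its filename tagging options on import):

# Rough sketch of deriving series/chapter/page tags from a filename, assuming
# a 'Series - c01 - p003.png' style naming scheme.
import re

def tags_from_filename(filename: str) -> list:
    m = re.match(r'(?P<series>.+?) - c(?P<chapter>\d+) - p(?P<page>\d+)\.\w+$', filename)
    if m is None:
        return []
    return [
        'series:' + m.group('series').strip(),
        'chapter:' + str(int(m.group('chapter'))),
        'page:' + str(int(m.group('page'))),
    ]

print(tags_from_filename('My Doujin - c01 - p003.png'))
# ['series:My Doujin', 'chapter:1', 'page:3']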

Maybe you can try importing one chapter or volume, without deleting the original file(s), and seeing how you like to read that in hydrus, with the different sort/collect choices. If it works better than a proper comic reader, then yeah I'd say do more, otherwise hold off a bit.


Anonymous Board owner 04/05/2020 (Sun) 19:33:23 Id: af0af2 [Preview] No.618 del
>>595
Hmm, that's odd. I was writing out the rules here, which involve calculating the 'cull' time period, and realised your fixed scheduling may be producing a weird one. Normally, as long as there are more than 250 urls in a query's file import cache, urls with a source time older than twice the sub's death period are deleted. The death period is the '90' part in 'sub query is dead if less than 1 url per 90 days', so typically 180 days.

If your death period is set to 360 days or something, then even if there are a thousand import objects in the sub, they may not be being culled down to 250 because they are probably all still less than two years old and hence relatively new according to the sub's timer and could be involved in future timing decisions.

This of course matters less for you with the static checking timing. It doesn't matter if 17 files came in in the past m days or if 3 did, you are still going to check every n days.

I think I'll do two things here--make the cull period calculation a bit cleverer than just the safe 'twice' the death period, and also cap the cull period to 90 days or so when the sub has static check timing as you do.

If your subs have very high death periods, even if you have the '1' part of '1 url per n days' set to '0' to remove the dead check altogether, I recommend you reduce it to 90 or 180 days, especially for subs that reliably give new files.
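To put those rules into a rough sketch (illustrative only, not the client's real code):

# Rough sketch of the cull rule described above: keep at least 250 urls in a
# query's file import cache, and only delete ones whose source time is older
# than twice the sub's death period.
import time

MIN_CACHE_SIZE = 250

def cull_cache(import_objects: list, death_period_days: int, now=None) -> list:
    # import_objects: list of (url, source_time) tuples, source_time in seconds
    if now is None:
        now = time.time()
    if len(import_objects) <= MIN_CACHE_SIZE:
        return import_objects  # never cull below the minimum
    cull_period = 2 * death_period_days * 86400  # 'twice the death period'
    keep = [(url, t) for (url, t) in import_objects if now - t < cull_period]
    if len(keep) < MIN_CACHE_SIZE:
        # never drop below the minimum; keep the newest entries
        keep = sorted(import_objects, key=lambda p: p[1], reverse=True)[:MIN_CACHE_SIZE]
    return keep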


