No.15933
Message from Administration:
Kemono is now unblocked on Verizon, OpenDNS, and various other global ISPs that had banned it. Additionally, the site should be a bit faster.
Also, if you are using any downloaders, you should no longer need cookies, so feel free to delete them if you'd like.
No.16342
As you may have noticed, the importer has been disabled. This is due to work being carried out on the site; expect it to be back up and running when the work is concluded. You can still start your import now: it will be added to the queue and imported as soon as the importer is re-enabled.
Sorry for any inconvenience. Thanks for understanding.
No.16365
The work has been finished and the importer re-enabled. The site should be a bit speedier now too, but please keep your downloading to a reasonable level. Thanks for your patience.
No.16589
Due to strange behaviour from the HDDs on our data server, we are running several checks to make sure it isn't anything we should worry about. To speed things up, we are cutting all connections to the data server. The cache servers are still operational, so if your file was cached it will be served without a problem; for non-cached files you will see 5XX-type HTTP errors.
We estimate the downtime will be around 1-2 days while the checks are running. During this time, imports will be put into the queue and will get run after the downtime is over; other site functions will continue to work.
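If your downloader trips over those errors, handling them boils down to something like this (a minimal sketch using Python and requests; the URL is a placeholder, not a real file path):

```python
# Minimal sketch of tolerating the temporary 5XX responses on non-cached files:
# retry a couple of times with a delay, otherwise skip and re-queue after the
# downtime. The URL below is a placeholder, not a real file path.
import time
import requests

urls = ["https://kemono.example/data/aa/bb/somefile.png"]

def try_download(url, retries=3, delay=60):
    for _ in range(retries):
        resp = requests.get(url, timeout=30)
        if resp.status_code == 200:
            return resp.content              # cached file, served normally
        if 500 <= resp.status_code < 600:
            time.sleep(delay)                # non-cached during the checks: wait, retry
            continue
        resp.raise_for_status()              # anything else is a real error
    return None                              # give up for now, try again later

for url in urls:
    if try_download(url) is None:
        print(f"{url}: not cached yet, retry after the maintenance window")
```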
No.16630
Work has now concluded. For more information, see >>16624, as he has a good explanation and some pretty good suggestions for everyone.
No.17618
So, all data servers were just checked by me personally, and those with badly behaving HDDs got their drives swapped for new ones.
Enjoy the new kemono download speed!
No.17640
30-60min maintenance work on the backend servers. Some new data might not load for the time being.
No.17644
Maintenance is finished.
No.17688
Hoo boy, there is no end to these things. Here is a short list that will be followed up with a longer explanation.
Are things fucked? Yes. How fucked? It is fixable, but it will take time. Is there data loss? Unlikely; I need more time to confirm, but we have backups.
Will imports work? Yes, I'll make them work in ~12h. How long will the fix take? Don't know, maybe a week or two. I'll make it so that you are able to get the data somehow.
The end-of-the-month import will run. Just submit your key, and once everything is running again it will import.
Now for the longer explanation, while things run in the background.
While the periodic flushing of data from RAID1 to the disk array was ongoing, the disks decided to hard reset themselves, to the point where the filesystems are now in a corrupted state.
The corruption seems to only apply to the data that was being flushed to the disks, which is still available on the RAID1 mount. This has happened before,
but whereas the filesystem was repairable that time, this time it is much worse, and I refuse to do any modifications to the filesystems/disks, so as not to fuck them up beyond the current state.
The metadata and the data up to the point of the "periodic flushing" seem to be doing fine. The hashes match and the data is readable. This is also the reason why you can view all the data right now.
The disks behave when you read from them at full speed, but god is not on your side when you start writing to them. The CRC errors hint at the backplane shitting itself and leaving skidmarks on the disks.
There is also the possibility of NCQ (plus Linux's own queue) shitting itself and Linux resetting the disk. In my day I experienced a multitude of disks resetting themselves so hard, due to fucked queueing, that you would need to physically disconnect and reconnect them.
But in this case the "hard reset" did bring the disks back, so I can only guess that the firmware got better, or that this is a Linux thing causing the problems. And because the drives were only recently released, nothing is to be found on the mailing lists.
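If you want to poke at this kind of thing yourself, it boils down to something like this (a minimal sketch, assuming a SATA drive showing up as /dev/sda; this is an illustration, not the actual server tooling):

```python
# Minimal diagnostic sketch (illustration only), assuming a SATA drive at /dev/sda.
# Reads the SMART UDMA_CRC_Error_Count attribute (cabling/backplane trouble shows
# up there) and caps the NCQ queue depth via sysfs; a depth of 1 effectively
# disables NCQ. Needs root.
import subprocess
from pathlib import Path

DEVICE = "sda"  # placeholder device name

def crc_error_count(dev: str) -> str:
    out = subprocess.run(["smartctl", "-A", f"/dev/{dev}"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "UDMA_CRC_Error_Count" in line:
            return line.split()[-1]          # raw value is the last column
    return "n/a"

def cap_queue_depth(dev: str, depth: int = 1) -> None:
    Path(f"/sys/block/{dev}/device/queue_depth").write_text(str(depth))

print("CRC errors:", crc_error_count(DEVICE))
cap_queue_depth(DEVICE, 1)                   # rules queueing in or out as the culprit
```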
Now, is the data safe? Yes it is. KP data is backed up periodically and the last backup was a bit before the "periodic flushing". The data on RAID1 is currently being backed up. So this is the least of my worries, I hope.
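For the curious, a hash check like the one mentioned above boils down to something like this (a minimal sketch; the manifest path and its "sha256  relative/path" format are made up for the example, not the actual KP backup tooling):

```python
# Sketch of verifying files against a hash manifest. Paths and the manifest
# format ("<sha256>  <relative path>" per line) are hypothetical examples.
import hashlib
from pathlib import Path

DATA_ROOT = Path("/storage/data")            # placeholder
MANIFEST = Path("/storage/manifest.sha256")  # placeholder

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

mismatches = 0
for line in MANIFEST.read_text().splitlines():
    expected, rel = line.split(maxsplit=1)
    if sha256_of(DATA_ROOT / rel) != expected:
        mismatches += 1
        print("MISMATCH", rel)
print("done, mismatches:", mismatches)
```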
And finally, about importing: I will have to juggle a lot, manually prime the cache servers with new data, back up and replicate the data to other servers, and a lot more.
BUT I think I will be able to deliver all the data that is being imported until the issues are resolved. I can't promise a smooth ride, but I think I can bridge over the issues we have right now.
fml, I just want a comfy ride without any issues, I'm tired. I just want some enterprise grade storage systems with support and not care about anything.
Also, I think I forgot to mention this, but it is only a data server that is affected. Nothing else is going on there, so the other parts of the site are not affected by this.
No.18654
Have ideas for what could come next for the importer? Suggest platforms that could be implemented into the importer via the form below.
This isn't a guarantee that they will be implemented. It is only a way to gauge interest in what MIGHT be implemented in the future.
Link:
https://forms.gle/DQSfhJMG4AmZLmpAA
No.18697
This is a forwarded reply from >>18576
I don't have anything to directly add to a certain discussion in here, mostly because it is a mess of random arguments, public displays of mental illness, and the seeking of answers that were already provided several times over. So, instead, let's sync up.
>"Patreon?"
The importer is being fixed. It is broken for complicated reasons that are not impossible to resolve, but certainly not trivial. I promise. I believe it is the foundational core of the entire project, and I will not leave it in disrepair longer than is needed.
More importantly, new archiver designs intended to replace Kitsune are being drafted, with the aim of preventing things like this from happening and making the reverse-engineering process more streamlined, getting (you) more content from across the paywalled web faster.
>"Requests?"
The maintenance of most "community"-related things became the job of other team members months ago, but I stand by their current decisions.
>"Uploader?"
uploading =/= importing, and lack of content updates has absolutely nothing to do with the former
In general, there are multiple development-related reasons why modtools and the fix never happened, and none of them are very important for the general public to know.
What you should know is that Kemono v3.0 is being worked on, with manual sharing and cloud drive snatching prioritized. Get used to the beta UI, by the way. Some form of it will be fully adopted soon-ish.
No.35620
tl;dr moving databases once more
You know, the previous DB server had been running for over two years with no restarts, no crashes, no nothing, except for a whole-ass network department full of retards fucking shit up.
That was followed by a weeks-long back and forth involving every NOC and DC contact in between, which was actively ignored by the network retards, and then by the higher-ups being notified about that mentally challenged department of theirs.
Within minutes of that mail the network was fixed, and it even came with a love letter that read "Vacate the account and servers in 30 days. :D". Everything was fucking daijōbu.
Cue the new server: set up ZFS/VMs/DBs/monitoring, all the bells and whistles. Connect all the servers back together and run synthetic load on the databases.
All is good and the webservers are talking to the newly migrated databases. Imports are running, everything is fine.
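For reference, "synthetic load" here means something along these lines (a sketch assuming PostgreSQL with psycopg2; the DSN and table are made up for the example, not the real schema):

```python
# Rough idea of synthetic load on a freshly migrated database: hammer inserts
# and reads in a loop so storage problems surface before real traffic does.
# Assumes PostgreSQL; the DSN and table name are placeholders.
import random
import string
import psycopg2

conn = psycopg2.connect("dbname=loadtest user=loadtest host=127.0.0.1")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS synth_load (id serial PRIMARY KEY, payload text)")

for i in range(100_000):
    payload = "".join(random.choices(string.ascii_letters, k=512))
    cur.execute("INSERT INTO synth_load (payload) VALUES (%s)", (payload,))
    if i % 1000 == 0:
        cur.execute("SELECT count(*) FROM synth_load")
        print("rows so far:", cur.fetchone()[0])

cur.close()
conn.close()
```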
>Notice Input/Output error on a single block.
>table index, kinda fine, regenerate.
>Wait half a day
>Input/Output error on two more blocks
>It's an actual table this time
Now this makes less and less sense. Check the host storage: checksum errors keep accumulating, yet the NVMe SMART data reports 0 issues.
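For the curious, that cross-check amounts to something like this (a minimal sketch; the pool name "tank" and /dev/nvme0 are placeholders, not the actual layout):

```python
# Sketch of comparing ZFS checksum error counters against the drive's own SMART
# verdict. Pool name and device path are placeholders.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

zpool_out = run(["zpool", "status", "tank"])
smart_out = run(["smartctl", "-H", "/dev/nvme0"])

# Crude parse: in the config section of `zpool status`, each device line has
# NAME STATE READ WRITE CKSUM, so the checksum error count is the last column.
for line in zpool_out.splitlines():
    cols = line.split()
    if len(cols) == 5 and cols[1] in ("ONLINE", "DEGRADED", "FAULTED"):
        name, state, rd, wr, cksum = cols
        if cksum != "0":
            print(f"{name}: {cksum} checksum errors (state {state})")

# Last line of `smartctl -H` is the overall health verdict, e.g. "... PASSED".
print(smart_out.strip().splitlines()[-1])
```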
>Start checking everything up and down, start testing memory a second time
Check IPMI logs:
>Reading that triggered event 1.73V
>RAM at 1.73V
<what.jpg
After a long memtest: no memory issues. Boot back into the system and move everything from the VMs closer to the host system.
Mempage issues gone, checksum errors are no more; did the reboot do the trick? Let it run overnight, and the checksum issues are back.
Possibly shitty WD Gen4 NVMe drives; I've seen some issue threads on the ZFS GitHub. Maybe not, who knows at this point. But not that many issues from that point on.
HAH!
>watchdog: BUG: soft lockup - CPU# stuck for 5912s!
This FUCKING system. Fucking shoot me, end me right now. So… we are moving DBs once more. This thing can go to hell, I do not want to deal with recovery because of some shit hardware issues. Either way, moving servers once more.
PS: Oh look, website throws 503, THE FUCKING VM DIED AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA