Administrator
Just want to acknowledge the long timeouts of our jogamp.org server :|
A while ago I moved to mpm_event to solve the issues and other details; now I have bumped up certain settings (simply by using the defaults, as the installation had a bad config). Hope it helps, and I may fiddle with the details if the problem re-occurs. If anybody has more experience on how to fine-tune this (we only have a 4-core CPU and 32 GB RAM), you are welcome to chime in. Let's see how it works now.

Administrator
jogamp-scripting commit c04b48166645e81bc402bfd59bd7e2d531c8fea5
apache2: Bump up mpm_event: ServerLimit 24, ThreadsPerChild 25, MaxRequestWorkers 24*25 = 600
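For reference, a minimal sketch of what such an mpm_event block could look like in /etc/apache2/mods-available/mpm_event.conf; only ServerLimit, ThreadsPerChild and MaxRequestWorkers come from the commit message, the remaining values are illustrative assumptions, not the actual jogamp.org config:

  # mpm_event tuning sketch (values besides the three from the commit are assumptions)
  <IfModule mpm_event_module>
      StartServers              4
      MinSpareThreads          25
      MaxSpareThreads          75
      ThreadLimit              64
      ThreadsPerChild          25
      ServerLimit              24
      MaxRequestWorkers       600   # 24 servers * 25 threads per child
      MaxConnectionsPerChild    0
  </IfModule>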

Administrator
In reply to this post by Sven Gothel
- updated bugzilla to 5.2
- fine-tuned MariaDB + ZFS (see the jogamp-scripting repo)

After monitoring the server for a while, the load caused by bugzilla (perl) was clearly visible. Since the update alone wasn't good enough, I looked closer and saw that MariaDB and the bugzilla/perl processes each consumed ~25% CPU, so MariaDB seems to have limited overall performance considerably. I found this nice page on MariaDB + ZFS: <https://shatteredsilicon.net/mysql-mariadb-innodb-on-zfs/>
After creating a ZFS dataset for /var/lib/mysql with recordsize=16K and adding the corresponding settings to /etc/mysql/my.cnf, MariaDB is down to < 10% CPU while bugzilla/perl sits at ~100%, i.e. bugzilla is no longer DB-limited :)
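For illustration, a minimal sketch of the kind of setup that article describes; the dataset/pool names and the exact my.cnf values below are assumptions, the real settings used on jogamp.org live in the jogamp-scripting repo:

  # dedicated ZFS dataset matching the 16K InnoDB page size (pool name is a placeholder)
  zfs create -o recordsize=16k -o mountpoint=/var/lib/mysql rpool/mysql

  # typical InnoDB-on-ZFS hints for /etc/mysql/my.cnf (illustrative, not the jogamp.org values)
  [mysqld]
  innodb_doublewrite     = 0       # ZFS already writes blocks atomically
  innodb_flush_method    = fsync   # O_DIRECT does not help on ZFS
  innodb_use_native_aio  = 0
  innodb_flush_neighbors = 0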

Administrator
In reply to this post by Sven Gothel
The DoS/SYN or whatever (attack) is sadly persistent over the last days :(
Perhaps we just became famous again :) Sporadically I can't even SSH into the machine, so I will spend more time analyzing the root cause and what to do about it.
Earlier I have adjusted:
- Linux TCP backlog to 5000, also the apache2 backlog to 5000
- apache2 mpm metrics (won't make a big difference in the range of 600-1200 max connections)
- bugzilla/perl remains a visible resource hog
- optimized MariaDB, good but not enough
- disabling the release/rc builds helped a lot
... bear with me ..
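A rough sketch of where those backlog knobs live, assuming a stock Debian-style setup; the exact values and file locations used on jogamp.org are in the jogamp-scripting repo:

  # sysctl tuning sketch (values taken from the post, placement is an assumption)
  net.ipv4.tcp_max_syn_backlog = 5000   # pending SYN queue
  net.core.somaxconn           = 5000   # accept() backlog cap

  # apache2 global server config: raise the listen queue to match
  ListenBacklog 5000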

Administrator
https://jogamp.org/cgit/jogamp-scripting.git/log/
commit d776cf17498fa57b9cbb79a79721ea7fd688bf17
server/setup/05-service-settings: Update MediaWiki Upgrade Info for 1.44
This includes a patch for the Bugzilla extension, see
server/setup/05-service-settings/patches/commit-c99dc10-wiki1_41-01.patch,
applied on top of https://github.com/mozilla/mediawiki-bugzilla commit ef950c90922fd07aaa3e6b2f85cf597bebf1b6aa

commit 4e9f399a7328735c0ccfdf090f9f187f4ad0ecc9
apache2 mpm_event: We settle with the defaults to avoid blocking all TCP access for now.

Previous enhancements ..
- mariadb-zfs
- bugzilla upgrade
- mediawiki upgrade to 1.44, with caching hopefully working
- including the mediawiki Bugzilla extension patch
- be aware that our mediawiki itself causes bugzilla requests
- robots.txt now covering bugzilla and wiki

Let's hope this is working well so far.
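A minimal sketch of what such robots.txt entries could look like; the paths and rules below are assumptions, not necessarily the jogamp.org layout (the real file is in the jogamp-scripting repo):

  # robots.txt sketch - keep well-behaved crawlers out of the dynamic apps
  User-agent: *
  Disallow: /bugzilla/
  Disallow: /wiki/index.php?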

Administrator
Had to add the following two /16 networks to the firewall block list :-(
They are not obeying robots.txt, and I doubt it's genuine interest :) Sure, probably 'just' a handful of hosts each .. but I can't isolate them, so I blocked the whole networks .. poor man's solution :(

# 8.210.30.40    # AlibabaCloud_HK .. but too many IPs
8.210.0.0/16     # AlibabaCloud_HK
# 47.243.251.69  # ALIBABA-CLOUD---HK .. but too many IPs
47.243.0.0/16    # ALIBABA-CLOUD---HK
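For illustration, how such a block could look with plain iptables, silently dropping instead of actively rejecting; whether jogamp.org applies this directly or via a wrapper script is an assumption (see jogamp-scripting/server/setup/attack for the real setup):

  # drop all traffic from the two /16 networks without sending a reject
  iptables -I INPUT -s 8.210.0.0/16  -j DROP   # AlibabaCloud_HK
  iptables -I INPUT -s 47.243.0.0/16 -j DROP   # ALIBABA-CLOUD---HK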

Administrator
jogamp-scripting commit 6d59cb4e51ad37eb31176656a7f32f26dbd0784e
Bump: Server/apache settings (back to mpm_event, fine-tuning for the SYN-request attack)
Running Bugzilla under mpm_prefork and mod_perl doesn't solve the server situation, hence reverted. We have to disable bugzilla for now, or from time to time, or perhaps run it in a separate apache2 instance with mod_perl + mpm_prefork? Or 'just' move to an alternative with better launch performance :/

Administrator
testing the 2nd apache server instance w/ mod_perl + mpm_prefork,
reached via a proxy config in the main server :) Seems to work.
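A rough sketch of such a split, assuming the second instance listens on a local port; the port number and path below are assumptions, not the actual jogamp.org config:

  # main apache2 instance (mpm_event): hand bugzilla off to the second instance
  ProxyPreserveHost On
  ProxyPass        /bugzilla/ http://127.0.0.1:8081/bugzilla/
  ProxyPassReverse /bugzilla/ http://127.0.0.1:8081/bugzilla/

  # second apache2 instance (mpm_prefork + mod_perl) only listens locally
  Listen 127.0.0.1:8081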

Administrator
The scripts netstat-list-abuse.sh and net-count-subs.sh in jogamp-scripting/server/setup/attack/scripts,
as well as netstat-openconnections.sh, help with analyzing SYN attacks. The last one shows the SYN states, and a manual run without watch shows the source IPs. The first one then counts connections per /16 net and also gives the proper netmask to be added to iptables, blocking without an active reject. It turns out that tons of questionable bots within the Tencent cloud sadly caused quite some additional trouble. If anybody has a much nicer approach than blocking /16 networks, please share. Thank you.
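The core idea can be sketched with a netstat/awk one-liner like the one below; this is an assumption of how those scripts work, not their actual contents:

  # count half-open (SYN_RECV) connections per /16 source network, worst offenders first;
  # the resulting a.b.0.0/16 entries can then be fed to iptables -j DROP
  netstat -ant | awk '$6 == "SYN_RECV" { split($5, ip, "."); print ip[1]"."ip[2]".0.0/16" }' \
      | sort | uniq -c | sort -rn | head -20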
