• Prealphas updated

    From g00r00@21:1/108 to All on Sat Feb 29 13:17:25 2020
    Updated the prealphas again.

    This should fix the "first SSH connection when in daemon mode fails" issue (hopefully).

    It should also introduce the POLL option for mis which will eventually
    replace FIDOPOLL so feel free to experiment with its goodness.

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From Vk3jed@21:1/109 to g00r00 on Sat Feb 29 19:01:00 2020
    On 02-29-20 13:17, g00r00 wrote to All <=-

    Updated the prealphas again.

    This should fix the "first SSH connection when in daemon mode fails"
    issue (hopefully).

    It should also introduce the POLL option for mis which will eventually replace FIDOPOLL so feel free to experiment with its goodness.

    Cool. I'll wait until A46 comes out, but this sounds nice. :)


    ... Some call me the gangster of love.
    === MultiMail/Win v0.51
    --- SBBSecho 3.10-Linux
    * Origin: Freeway BBS Bendigo,Australia freeway.apana.org.au (21:1/109)
  • From Avon@21:1/101 to g00r00 on Sat Feb 29 22:21:51 2020
    On 29 Feb 2020 at 01:17p, g00r00 pondered and said...

    It should also introduce the POLL option for mis which will eventually replace FIDOPOLL so feel free to experiment with its goodness.

    The impact on speed to send traffic to nodes using MIS POLL is amazing!

    Just needed to share that. Thank you.

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From g00r00@21:1/108 to Avon on Sat Feb 29 16:51:08 2020
    It should also introduce the POLL option for mis which will eventually replace FIDOPOLL so feel free to experiment with its goodness.

    The impact on speed to send traffic to nodes using MIS POLL is amazing!

    Just needed to share that. Thank you.

    Awesome, I am glad to hear it! There seem to be a couple of wonky display issues with the "currently connected" window, but hopefully I can get those resolved shortly. They don't seem to affect any actual data transmission or the standard log window.

    Now that we have this done, next up let's get that error tracking and crash/hold/deactivate system done! Oddly enough, I am having trouble deciding what I want to call the stanza in the .INI file for that :)

    Intentions are:

    Track failed connections (outbound) and demote from crash to hold
    Track failed authentications
    Track last interaction date/time (done) and deactivate after X days

    But back to the stanza... Something like this:
    [EchoNodeGovernor]

    But I don't like that. Any suggestions?

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From roovis@21:4/165 to g00r00 on Sat Feb 29 08:36:24 2020
    Updated the prealphas again.

    This should fix the "first SSH connection when in daemon mode fails"
    issue (hopefully).

    It does fix the first SSH connection failing. But now it seems every connection made creates a zombie process. I just connected twice to the daemon-mode MIS, and it happens immediately.

    -roovis

    --- Mystic BBS v1.12 A46 2020/02/29 (Linux/64)
    * Origin: w0pr.win (21:4/165)
  • From g00r00@21:1/108 to roovis on Sat Feb 29 22:00:59 2020
    It does fix the first SSH connection failing. But now it seems every connection made creates a zombie process. I just connected twice to the daemon-mode MIS, and it happens immediately.

    That process is part of the open node; it will go away when the user logs
    off. Do a grep before, during, and after logout and you'll see it go from non-existent, to existing, to being removed.

    I used to know more details about it, but it's been so long now I can't
    remember the circumstances surrounding it. Maybe I can clean it up, but
    really it's almost just a display issue.

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From Avon@21:1/101 to g00r00 on Sun Mar 1 09:58:01 2020
    On 29 Feb 2020 at 04:51p, g00r00 pondered and said...

    Intentions are:

    Track failed connections (outbound) and demote from crash to hold
    Track failed authentications
    Track last interaction date/time (done) and deactivate after X days

    But back to the stanza... Something like this:
    [EchoNodeGovernor]

    [EchoNodeTrack]

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From roovis@21:4/165 to g00r00 on Sat Feb 29 15:40:26 2020
    remember the circumstances surrounding it. Maybe I can clean it up but really it's almost just a display issue.

    Yeah, there do not seem to be any negative performance issues or anything.

    -roovis

    --- Mystic BBS v1.12 A46 2020/02/29 (Linux/64)
    * Origin: w0pr.win (21:4/165)
  • From g00r00@21:1/108 to Avon on Sun Mar 1 08:43:07 2020
    But back to the stanza... Something like this:
    [EchoNodeGovernor]

    [EchoNodeTrack]

    Ok so we have two contestants!

    [EchoNodeGovernor]
    [EchoNodeTracker]

    WHO WILL WIN?! We need votes!

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From g00r00@21:1/108 to Avon on Sun Mar 1 09:05:05 2020
    [EchoNodeTrack]

    Here is what I am working on so far, please provide feedback if you have any:

    [EchoNodeTracker]

    ; Automatically reset the echonode tracking statistics after a specific
    ; number of days (or 0 to disable)

    reset_stats = 0

    ; Set the number of days of inactivity before an Echomail Node is
    ; automatically deactivated (or 0 to disable)

    inactivity = 0

    ; When set to TRUE, MUTIL will remove any files or mail packets from the
    ; node's outbound queue upon deactivation from inactivity

    clear_outbound = true

    ; When Mystic is unable to connect outbound to a node it can automatically
    ; change their mail type from "Crash" to "Hold" after a specific number of
    ; outbound connection failures (or 0 to disable)

    crash_errors = 0
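    To make the semantics of the draft stanza concrete, here is a quick loader sketch. This is a hypothetical illustration in Python (Mystic/MUTIL is not written in Python, and `load_tracker` is not a real Mystic function); only the key names and their 0-disables defaults come from the message above:

```python
# Hypothetical loader for the proposed [EchoNodeTracker] stanza.
# Key names and defaults are from the draft above; the loader itself
# is an illustration, not Mystic's actual implementation.
from configparser import ConfigParser

DEFAULTS = {
    "reset_stats": "0",     # days before stats reset (0 = disabled)
    "inactivity": "0",      # days before a node is deactivated (0 = disabled)
    "clear_outbound": "true",
    "crash_errors": "0",    # failures before Crash demotes to Hold (0 = disabled)
}

def load_tracker(ini_text: str) -> dict:
    cp = ConfigParser()
    cp.read_string(ini_text)
    # Fall back to defaults when the stanza is absent entirely.
    section = cp["EchoNodeTracker"] if cp.has_section("EchoNodeTracker") else DEFAULTS
    return {
        "reset_stats": int(section.get("reset_stats", "0")),
        "inactivity": int(section.get("inactivity", "0")),
        "clear_outbound": section.get("clear_outbound", "true").lower() == "true",
        "crash_errors": int(section.get("crash_errors", "0")),
    }
```

    With this reading, a value of 0 for any numeric key simply switches that check off, matching the "(or 0 to disable)" comments in the stanza.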

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From Avon@21:1/101 to g00r00 on Sun Mar 1 15:19:08 2020
    On 01 Mar 2020 at 08:43a, g00r00 pondered and said...

    [EchoNodeGovernor]
    [EchoNodeTracker]

    WHO WILL WIN?! We need votes!

    Could be [EchoNodeTerminator] or [EchoNodeIllBeBack] :)

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From Black Panther@21:1/186 to Avon on Sat Feb 29 19:22:12 2020
    On 01 Mar 2020, Avon said the following...

    [EchoNodeGovernor]
    [EchoNodeTracker]

    WHO WILL WIN?! We need votes!

    Could be [EchoNodeTerminator] or [EchoNodeIllBeBack] :)

    [EchoNodeYouBetterBeBack] ;)


    ---

    Black Panther(RCS)
    Castle Rock BBS

    --- Mystic BBS v1.12 A45 2020/02/18 (Linux/64)
    * Origin: Castle Rock BBS - bbs.castlerockbbs.com (21:1/186)
  • From Avon@21:1/101 to g00r00 on Sun Mar 1 15:28:09 2020
    On 01 Mar 2020 at 09:05a, g00r00 pondered and said...

    Here is what I am working on so far, please provide feedback if you have any:

    ; Automatically reset the echonode tracking statistics after a specific
    ; number of days (or 0 to disable)

    reset_stats = 0

    This is for all echonodes I take it? Seems a good idea, but I'm not sure how often it may be called on. I guess it depends on the types of stats reports Mystic will eventually generate for echomail nodes. I can see how in some
    cases you might want to have say a daily, weekly, monthly total for some of those reports, perhaps even longer. If it's one reset to wipe all stats then
    I can't see it being used too much.

    Perhaps consider more granular options for resetting data? I'm not sure how that may work / be feasible, perhaps something like

    reset_daily
    reset_weekly
    reset_monthly

    I'm not sure, but just a thought...


    ; Set the number of days of inactivity before an Echomail Node is
    ; automatically deactivated (or 0 to disable)

    inactivity = 0

    Yep all good

    ; When set to TRUE, MUTIL will remove any files or mail packets from the
    ; node's outbound queue upon deactivation from inactivity

    clear_outbound = true

    Yep also good, it should clear echomail and filebox.

    ; When Mystic is unable to connect outbound to a node it can automatically
    ; change their mail type from "Crash" to "Hold" after a specific number of
    ; outbound connection failures (or 0 to disable)

    crash_errors = 0

    This looks fine, bar the fact that I'm unsure how many failures should count; perhaps this should be a figure expressed in days or hours?

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From Avon@21:1/101 to Black Panther on Sun Mar 1 15:28:35 2020
    On 29 Feb 2020 at 07:22p, Black Panther pondered and said...

    [EchoNodeYouBetterBeBack] ;)

    here we go :)

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From g00r00@21:1/108 to Avon on Sun Mar 1 10:40:25 2020
    Could be [EchoNodeTerminator] or [EchoNodeIllBeBack] :)

    Maybe both! The terminator will deactivate people and IllBeBack will reactivate after a set period of deactivation time lol

    --- Mystic BBS v1.12 A46 2020/03/01 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From g00r00@21:1/108 to Avon on Sun Mar 1 10:50:01 2020
    reset_stats = 0

    This is for all echonodes I take it? Seems a good idea, but I'm not sure how often it may be called on. I guess it depends on the types of stats reports Mystic will eventually generate for echomail nodes. I can see
    how in some cases you might want to have say a daily, weekly, monthly total for some of those reports, perhaps even longer. If it's one reset
    to wipe all stats then I can't see it being used too much.

    It will reset stats based on that many days. So if you want to reset node stats every 30 days you'd set it to 30.

    Perhaps consider more granular options for resetting data? I'm not sure how that may work / be feasible, perhaps something like

    reset_daily
    reset_weekly
    reset_monthly

    You can sort of already do this if you set it to 7 for a weekly reset, for example. But I am open to any ideas of course, if you can think of a better
    way to do this.

    ; When set to TRUE, MUTIL will remove any files or mail packets from the
    ; node's outbound queue upon deactivation from inactivity

    clear_outbound = true

    Yep also good, it should clear echomail and filebox.

    It should clear all echomail and filebox. I've already finished the implementation of all the features I showed in the original message, but
    none of it is actually tested.

    I also fixed the MIS POLL display error too.

    ; When Mystic is unable to connect outbound to a node it can automatically
    ; change their mail type from "Crash" to "Hold" after a specific number of
    ; outbound connection failures (or 0 to disable)

    crash_errors = 0

    This looks fine bar the fact I am unsure how many failures should count, perhaps this should be a figure expressed in days or hours?

    That makes sense. Maybe a combination of two factors?

    I could base it entirely off of the last successful connection time, but the issue with that is: if you screw up your batch file and Mystic never actually TRIES to send anything (because you called "miss poll" instead of "mis poll"), then you could end up with all your crash nodes being demoted to Hold (when their setup was fine and yours was actually broken).

    Because of that maybe it should be based off of a dual factor:

    ; Node must have a combination of both of the following values before having
    ; mail type and filebox type set to hold:

    ; Number of bad outbound connection attempts
    crash_errors = 7

    ; Number of days since last successful outbound connection
    crash_days = 7

    Another idea is that it COULD reset the "Crash errors" value tracker every time it successfully connects. So the error count would be the number of failed attempts since the last successful attempt. But then that is confusing when you have things like "reset statistics" because some of them are already resetting on their own! lol

    Lots of ways to think about it. Right now I think the dual factor mentioned above makes the most sense to me.
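    The dual-factor rule described above could be sketched like this. This is an illustrative Python snippet, not Mystic's actual code; the names `crash_errors` and `crash_days` come from the proposal, the function itself is hypothetical:

```python
# Sketch of the dual-factor demotion rule: a node drops from Crash to
# Hold only when BOTH the failure count AND the staleness threshold
# are exceeded, so one broken poll script can't demote everyone.
from datetime import datetime, timedelta

def should_demote_to_hold(failed_attempts: int,
                          last_success: datetime,
                          now: datetime,
                          crash_errors: int = 7,
                          crash_days: int = 7) -> bool:
    if crash_errors == 0 or crash_days == 0:
        return False  # 0 disables the check, as in the stanza comments
    stale = (now - last_success) >= timedelta(days=crash_days)
    return failed_attempts >= crash_errors and stale
```

    The point of requiring both factors is the "miss poll" scenario above: if your own setup is broken and no outbound attempts are ever made, the failure counter never climbs, so healthy crash nodes stay at Crash.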

    --- Mystic BBS v1.12 A46 2020/03/01 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From Avon@21:1/101 to g00r00 on Sun Mar 1 17:24:45 2020
    On 01 Mar 2020 at 10:50a, g00r00 pondered and said...

    Perhaps consider more granular options for resetting data? I'm not sure how that may work / be feasible, perhaps something like

    reset_daily
    reset_weekly
    reset_monthly

    You can sort of already do this if you set it to reset 7 for weekly for example. But I am open to any ideas of course if you can think of a better way to do this.

    But if you reset every 7 days how would you ever get the 30-day stats?

    That's kinda where my brain is going, just not sure how to achieve both
    without one coming at the expense of the other?

    Yep also good, it should clear echomail and filebox.

    It should clear all echomail and filebox. I've already finished the implementation of all the features I showed in the original message but none of it is actually tested.

    Yep that sounds good. And some kind of report is generated somewhere for
    admin and/or for possible posting to echomail?

    I also fixed the MIS POLL display error too.

    Coolio... it seems the display window is limited in how many concurrent active connections it can show, so if there were 12 I'm not sure how it would display? Regardless, it seems a nice thing to see,
    but the functionality of increased concurrency is the main thing that appeals most to me... it's great.

    This looks fine bar the fact I am unsure how many failures should count, perhaps this should be a figure expressed in days or hours?

    That makes sense. Maybe a combination of two factors?

    I could base it entirely off of the last successful connection time, but the issue with that is: if you screw up your batch file and Mystic never actually TRIES to send anything (because you called "miss poll" instead of "mis poll"), then you could end up with all your crash nodes being demoted to Hold (when their setup was fine and yours was actually broken).

    Yep that would be bad.

    Because of that maybe it should be based off of a dual factor:

    ; Node must have a combination of both of the following values before having
    ; mail type and filebox type set to hold:

    ; Number of bad outbound connection attempts
    crash_errors = 7

    ; Number of days since last successful outbound connection
    crash_days = 7

    I like this. What about if you have a node that is only ever set up for HOLD, so you never poll it, but you need to know if it's been inactive polling in?

    I know we have an inactivity setting being kicked around, but how are you defining inactivity?

    ; Set the number of days of inactivity before an Echomail Node is
    ; automatically deactivated (or 0 to disable)

    inactivity = 0

    e.g.

    Inactive: 0 days            Last In:  Never
    Received: 27 (29KB)         Last Out: 01 Mar 2020 17:06
    Sent:     3,607 (7,037KB)   Reset:    Never

    Is inactive only looking at Last Out, and that is recorded as the date a node had traffic crashed to it or the node picked up packets on hold for it at the HUB?

    Another question,

    ; When set to TRUE, MUTIL will remove any files or mail packets from the
    ; node's outbound queue upon deactivation from inactivity

    clear_outbound = true

    Will Mystic know the difference between an echomail node set as inactive by
    the sysop and one the system has determined, by the rules set in this stanza, should be removed?

    You wouldn't want a deliberately set inactive node by the sysop to be wiped.

    I guess you could add a switch in the echomail node settings much like in the User editor when you set a user to be deleted... so you can set Clear Outbound manually to Yes and have MUTIL deal to the echomail node packets/files...
    Or even one to protect against an automated purge... perhaps that's overkill but I am thinking along the lines of the User Editor options and trying to apply such notions to echomail nodes :)

    --- Mystic BBS v1.12 A46 2020/02/29 (Windows/32)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From g00r00@21:1/108 to Avon on Sun Mar 1 11:40:41 2020
    But if you reset every 7 days how would you ever get the 30 days stats?

    That's kinda where my brain is going, just not sure how to achieve both
    at the expense of one?

    What will happen is you'd set this to reset at the maximum interval of time you want to report on. The data in those numbers isn't really granular enough for any in-depth reporting; it's more of a summary and basic error tracking.

    There is probably going to be some more granular tracking that happens per-node, where major events are tracked like "<timestamp> Connected to <addr>" or "<timestamp> Received fsx_info.zip (12,345 bytes)". This data will also reset with the reset of all stats so it doesn't grow unmanageable in size.

    You may already have some echodata.ID.dat in your DATA directory which will probably need to be manually removed at some point, because I accidentally had that enabled in a prealpha. :) But that is going to be a major event log on a per-node basis (or that's how I dreamed it up; maybe it could change).

    Yep that sounds good. And some kind of report is generated somewhere for admin and/or for possible posting to echomail?

    For the deleting of echomail data? It just logs it in the mutil.log. If you have level 3 on, it will show you the path and file of everything it deleted. If not, it just tells you it's doing it.

    But back to the first quote above, that sort of thing could be added to the per-node event logging for reporting purposes, which would allow you to do just that: create a list of nodes deactivated in the last 7 days, for example.

    Coolio... it seems the display window is limited in that you can only
    see how many concurrent active connections the window will display, so
    if it were 12 I'm not sure how it would display? Regardless it seems a

    You can scroll through with the arrow keys if there are 12, but the viewport is only 7 right now, and that is the maximum concurrent connections too until it's configurable somewhere. It works just like the connection list in MIS proper.

    The reality is that active connections are added and removed so fast that unless you're sending larger amounts of data there isn't really much time to scroll around and enjoy the view anyway! But it is capable if you want to.

    I like this. What about if you have a node that is only ever set up for HOLD, so you never poll it, but you need to know if it's been inactive polling in?

    The "inactive" check that disables a node would catch that as they would be inactive.

    There is no need to set it to Hold because it already is so. The crash check only looks at nodes that have mail type Crash and/or FileBox type Crash.

    I know we have an inactivity setting being kicked around, but how are you defining inactivity?

    Inactivity is the last time you've connected with that node, in any capacity. So if they connect to you, it counts. If you connect to them, it counts. If there is dead space between the two nodes, it starts counting the inactivity days until it ultimately deactivates the node based on your setting.

    So inactivity catches hold nodes that go inactive, and the crash system catches crash systems that go cold by setting them to Hold. So if a crash system
    goes dead, first it is downgraded to Hold and ultimately deactivated entirely, all through automation.

    IE Crash steps down to Hold, Hold expires to deactivated.
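    That Crash -> Hold -> deactivated lifecycle could be sketched as a tiny state machine. Again a hypothetical Python illustration, not Mystic's code; the state names mirror the discussion and the thresholds stand in for crash_errors and inactivity:

```python
# Illustrative lifecycle from the discussion: Crash steps down to Hold
# after repeated outbound failures, and any node (Crash or Hold) is
# deactivated after `inactivity` days with no contact in either direction.
def next_state(state: str,
               days_since_contact: int,
               failed_attempts: int,
               crash_errors: int = 7,
               inactivity: int = 30) -> str:
    if state == "crash" and crash_errors and failed_attempts >= crash_errors:
        return "hold"  # demote: stop crashing mail to a dead link
    if state in ("crash", "hold") and inactivity and days_since_contact >= inactivity:
        return "inactive"  # full deactivation after prolonged silence
    return state
```

    Note the inactivity check applies to Hold-only nodes too, which answers the question above: a node you never poll outbound still gets caught once it stops polling in.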

    Will Mystic know the difference between an echomail node set as inactive by the sysop and one the system has determined by the rules being set in this stanza is something to be removed?

    You wouldn't want a deliberately set inactive node by the sysop to be wiped.

    It only clears outbound data at the exact time it deactivates the node, during the [EchoNodeTracker] process. If a node is already inactive it just ignores it.

    If you manually deactivate a node it will be fine. And when you re-enable a node in the Echonode Editor, Mystic will now also ask you if you want to reset their statistics.

    --- Mystic BBS v1.12 A46 2020/03/01 (Windows/64)
    * Origin: Sector 7 (21:1/108)
  • From g00r00@21:1/108 to All on Mon Mar 16 23:00:40 2020
    Updated the prealphas again today to fix some server SSL code and a couple of other minor things.

    --- Mystic BBS v1.12 A46 2020/03/15 (Windows/64)
    * Origin: Sector 7 (21:1/108)