
[00:24:02] <nevenr> AlexLehm: f.y.i. when I send email with io.vertx.ext.mail.MailClient, I get this in the log: “84335 [vertx-blocked-thread-checker] WARN i.v.core.impl.BlockedThreadChecker - Thread Thread[vert.x-eventloop-thread-2,5,main] has been blocked for 3649 ms, time limit is 2000”

[00:25:54] <AlexLehm> nevenr: do you have a stack trace from that?

[00:26:13] <AlexLehm> there should be a stack trace after it has been blocked for 5 seconds, I think

[00:27:11] <nevenr> nope, I only have blocks of up to 4279 ms

[00:28:57] <nevenr>
254349 [vert.x-eventloop-thread-2] DEBUG i.v.e.mail.impl.SMTPConnectionPool - getConnection()
254349 [vert.x-eventloop-thread-2] DEBUG i.v.e.mail.impl.SMTPConnectionPool - create a new connection
254349 [vert.x-eventloop-thread-2] DEBUG io.vertx.ext.mail.impl.SMTPStarter - connection.openConnection
254895 [vert.x-eventloop-thread-2] DEBUG io.netty.handler.ssl.SslHandler - [id: 0x30a690aa, / ⇒

[00:29:36] <nevenr> I have logs but I can't send them here ….

[00:30:48] <AlexLehm> could you put the logs up on a gist?

[00:31:17] <AlexLehm> may be an issue with ssl actually, this is not completely async I think

[00:34:52] <nevenr> AlexLehm:

[00:35:40] <nevenr> AlexLehm: Yes I use SSL in SMTP.

[00:36:05] <AlexLehm> the time for the ssl is only 0.3s, that can't be it

[00:36:56] <AlexLehm> is run by gmx, I have tested with their service, but I have not tested yet

[00:37:33] <nevenr> Can I decrease the 5 seconds for the stack trace to 4 or less?

[00:38:14] <AlexLehm> you can set a config property to set the time

[00:38:25] <AlexLehm> i have to look that up

[00:45:03] <AlexLehm> you can set the time with setWarningExceptionTime(2000) on the VertxOptions object you use to create the Vertx instance

[00:46:10] <AlexLehm> actually its setWarningExceptionTime(2000000000)

[00:46:12] <AlexLehm> ns
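For reference, a minimal sketch of what is being described (assuming the Vert.x 3.x API, where setWarningExceptionTime takes nanoseconds and controls when the blocked-thread warning includes a stack trace):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class WarningTimeExample {
    public static void main(String[] args) {
        // Lower the threshold above which a blocked-thread warning
        // is logged with a full stack trace; the value is in nanoseconds,
        // so 2_000_000_000L corresponds to 2 seconds (default is 5 seconds).
        VertxOptions options = new VertxOptions()
            .setWarningExceptionTime(2_000_000_000L);
        Vertx vertx = Vertx.vertx(options);
        // ... deploy verticles, create a MailClient, etc.
        vertx.close();
    }
}
```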

[00:47:04] <nevenr> ok, I'm going to try it

[00:50:00] <AlexLehm> does not allow registrations since I am from Germany, but when I try to send a mail with incorrect credentials it gives an auth error without blocking

[00:56:39] <nevenr> AlexLehm: I found the reason

[00:56:55] <nevenr> io.vertx.core.VertxException: Thread blocked at Method) ~[na:1.8.0_66]

[00:57:03] <nevenr> DNS name resolution

[00:57:58] <AlexLehm> that is right, that may be blocking

[01:00:37] <nevenr> Thanks AlexLehm, I learned something tonight.

[01:01:28] <AlexLehm> if that is in the method io.vertx.ext.mail.mailencoder.Utils.getMyHostname(), I could fix that

[01:02:12] <AlexLehm> you can work around that by setting a hostname in the MailConfig object though

[01:02:31] <AlexLehm> setOwnHostname
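A sketch of the workaround being suggested (hostnames and port are placeholders; assuming the vertx-mail-client 3.x API, where setOwnHostname supplies the EHLO/HELO name so the client skips the blocking DNS lookup of the local hostname):

```java
import io.vertx.core.Vertx;
import io.vertx.ext.mail.MailClient;
import io.vertx.ext.mail.MailConfig;

public class MailHostnameWorkaround {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        MailConfig config = new MailConfig()
            .setHostname("smtp.example.com")       // placeholder SMTP server
            .setPort(587)
            // Providing the local hostname up front avoids the blocking
            // reverse DNS lookup that was freezing the event loop:
            .setOwnHostname("myhost.example.com"); // placeholder local hostname
        MailClient client = MailClient.createNonShared(vertx, config);
        // ... client.sendMail(...), then client.close() and vertx.close()
    }
}
```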

[01:06:21] <nevenr>

[01:06:34] <nevenr> stacktrace

[01:07:10] <nevenr> yes it is in “at io.vertx.ext.mail.impl.SMTPInitialDialogue.getMyHostname( ~[vertx-mail-client-3.1.0.jar:na]”

[01:08:28] * ChanServ sets mode: +o temporalfox

[01:11:37] <AlexLehm> ok, i will implement a fix for that, in the meantime, you can work around that issue by setting the ownHostname property

[01:11:53] <AlexLehm> got to turn in, it's already 1am in my timezone

[01:12:00] <AlexLehm> nevenr: bye

[01:12:50] <nevenr> AlexLehm: Thanks a lot, 1am in my tz too. Regards from Serbia.

[01:13:25] <nevenr> AlexLehm: gute nacht ("good night")

[05:22:58] * ChanServ sets mode: +o temporalfox

[10:58:02] <cescoffier> Hi everyone, just to let you know, vertx 3.2.0 has been released:

[11:02:36] <tsegismont> Bravo

[11:30:55] <temporalfox> congrats to everyone involved :-)

[16:03:58] <aesteve> hi :) congrats for 3.2.0 !

[18:28:42] *** ChanServ sets mode: +o temporalfox

[22:54:32] <azhuchkov> @AlexLehm regarding the issue, I suppose you are right that S3 closes the connection after several requests. As I see, there is also a timeout (S3 doesn't keep connections open infinitely). But if it closes the channel gracefully, vertx must handle it and remove the connection from the pool. My very first assumption was that S3 terminates the connection abruptly, but the CLOSE_WAIT state means it sent a FIN packet.

[23:03:40] <AlexLehm> i think the http protocol usually has Connection: keep-alive or Connection: close in the reply so that the client can decide if the connection may be reused; plus, if the connection is closed by the server, that should be handled by the client as well

[23:04:24] <AlexLehm> if a timeout occurs on the keep alive connection, this has to be handled by the client as well

[23:04:58] <AlexLehm> the vertx client has a keepAliveTimeout parameter as well, so this may be handled, haven't properly checked that though

[23:05:23] <AlexLehm> I have created a small example server that fakes an HTTP server and I think that sort of reproduces the issue

[23:12:12] <azhuchkov> Maybe you meant idle timeout? I can't find keepAliveTimeout

[23:13:03] <azhuchkov> So, your fake server shows the problem?

[23:13:31] <AlexLehm> yes, idle timeout is it
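For context, the option being referred to looks roughly like this in the Vert.x 3.x HttpClient API (a hedged sketch; the timeout value is illustrative):

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;

public class IdleTimeoutExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        HttpClientOptions options = new HttpClientOptions()
            .setKeepAlive(true)   // reuse pooled connections between requests
            .setMaxPoolSize(1)    // pool size 1, as in the test described above
            .setIdleTimeout(10);  // close pooled connections idle for 10 seconds
        HttpClient client = vertx.createHttpClient(options);
        // ... make requests; idle pooled connections get closed locally
        //     instead of lingering until the server drops them
    }
}
```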

[23:13:48] <AlexLehm> it gives a connection closed exception on the 3rd request

[23:13:49] <AlexLehm>

[23:17:22] <AlexLehm> I think its the same issue but the effect is a bit different

[23:20:51] <azhuchkov> not sure. here the client got a connection from the pool and it was closed afterwards. probably the request should be retried at the connection-manager level, but nevertheless it can be handled.

[23:22:18] <AlexLehm> i think it should not return the connection to the pool if Connection: keep-alive is not detected

[23:24:05] <AlexLehm> i am doing the test with a pool size of 1, but that should be the same issue

[23:24:50] <AlexLehm> it should handle the connection closed exception more gracefully, but in the case of an http-compliant server it should not run into the issue in the first place (i think)

[23:26:17] <azhuchkov> my case is when the pool is full of half-closed connections (closed by the remote endpoint) but the local endpoint continues to use them.

[23:26:50] <AlexLehm> yes, that is the same thing only it happens directly since i have a pool size of 1

[23:26:52] <azhuchkov> you get connclosedexceptions because your local endpoint closed them too

[23:27:16] <AlexLehm> actually I could do the same with a larger connection pool, let me try that

[23:27:29] <azhuchkov> it's like: channel.close(); - you get an exception immediately

[23:28:22] <azhuchkov> but if you didn't invoke channel.close() your read() will hang (I assume)

[23:33:34] <azhuchkov> so, as I see: 1. remote endpoint makes channel.close(); 2. local TCP driver receives FIN packet. 3. local endpoint gets an READ event on the channel.

[23:34:27] <azhuchkov> 4. local handler makes a read() and it returns -1 (closed). 5. local handler must do channel.close() and remove it from the pool

[23:36:54] <azhuchkov> after this scenario everything should work fine, but if you skip the close handling, you will end up in my situation
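The sequence above can be reproduced with plain JDK sockets (no Vert.x involved): after the remote side closes gracefully and sends FIN, a blocking read() returns -1 rather than throwing, and it is up to the local side to close the socket and evict it from any pool:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class FinDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            // "Remote endpoint": accepts one connection and closes it
            // immediately; closing the socket sends a FIN to the peer.
            Thread remote = new Thread(() -> {
                try (Socket s = server.accept()) {
                    // nothing: close-on-exit sends FIN
                } catch (IOException ignored) {
                }
            });
            remote.start();

            try (Socket client = new Socket("localhost", port)) {
                InputStream in = client.getInputStream();
                // After the peer's FIN, read() returns -1 (end of stream);
                // it does NOT throw. The local side must notice this and
                // close/evict the connection itself.
                int r = in.read();
                System.out.println("read() returned: " + r);
            }
            remote.join();
        }
    }
}
```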

[23:38:57] <AlexLehm> you mean channel close in the client?

[23:41:02] <azhuchkov> yep

[23:42:49] <AlexLehm> ok, then i think I know where the issue is, let me check that

[23:48:00] <AlexLehm> looks like the code handles the close on the server side when it tries to do a new request

[23:51:49] <azhuchkov> it's a handler assignment

[23:52:17] <AlexLehm> azhuchkov: actually I have a question about your computer, do you use english language settings? I have some kind of issue since I use a non-english locale in windows, and I am not sure if that is part of the issue I see

[23:53:11] <AlexLehm> yes that should do the close processing of the connection

[23:53:48] <azhuchkov> AlexLehm: yes, i use english locale

[23:54:00] <AlexLehm> ok

[23:54:26] <AlexLehm> I have updated the gist to use more connections, could you run that in your env to see what output it gives?