
[13:06:21] <dns_> Hi there! Is it possible to communicate with hazelcast instance when I run my Verticle in cluster mode?

[13:07:08] <dns_> for creating distributed objects. for example

[13:19:51] <cescoffier> dns_ : we provide shared data support

[13:19:53] <cescoffier> vertx.sharedDate()

[13:19:53] <cescoffier> sharedData

[13:22:24] <dns_> ok.. Map structure only.. so.. Can I create a new Hazelcast instance separately from vertx cluster?

[13:26:33] <cescoffier> dns_: yes you can, but it's an experimental feature (we have a few bugs, but we really would like to have this)
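The shared data support mentioned above can be sketched as follows. This is a minimal illustration, assuming Vert.x 3.x started in cluster mode with the Hazelcast cluster manager on the classpath; names like "my-map" are illustrative.

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class SharedDataExample {
  public static void main(String[] args) {
    // Start a clustered Vert.x instance (picks up the Hazelcast
    // cluster manager from the classpath)
    Vertx.clusteredVertx(new VertxOptions(), res -> {
      if (res.succeeded()) {
        Vertx vertx = res.result();
        // Cluster-wide async map, backed by Hazelcast in cluster mode
        vertx.sharedData().<String, String>getClusterWideMap("my-map", mapRes -> {
          if (mapRes.succeeded()) {
            mapRes.result().put("key", "value", putRes -> {
              // the entry is now visible from every node in the cluster
            });
          }
        });
      }
    });
  }
}
```

As the conversation notes, shared data only exposes map-like structures; creating a separate Hazelcast instance for other distributed objects was experimental at the time.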

[15:54:04] <rajith> temporal_: ping

[15:54:10] <temporal_> hi

[19:18:58] <aesteve> hi everyone

[19:25:33] <aesteve> hi temporal_ I saw your comment on the GoogleGroup I'll try the new version right after dinner

[19:25:51] <temporal_> the send file *should* work :-)

[19:25:58] <temporal_> but I haven't tested it extensively at all

[19:26:06] <aesteve> I'm here for that matter :P

[20:02:22] <temporal_> yes early feedback is important :-)

[20:33:11] <aesteve> temporal_ it works indeed. But it doesn't make any difference :\

[20:34:13] <aesteve> I'm a bit sad :D you see something obvious I'm missing ?

[21:05:49] *** ChanServ sets mode: +o temporalfox

[21:31:05] <aesteve> temporalfox I pushed the latest version with latency as query parameter

[21:31:13] <temporalfox> ah yes good idea

[21:31:18] <temporalfox> you should have a timer

[21:31:21] <temporalfox> before sending data

[21:31:44] <temporalfox> at connection time

[21:32:02] <temporalfox> hard to simulate

[21:32:34] <temporalfox> but the idea is that with http/1.1 it happens for each connection

[21:32:41] <temporalfox> and for http2 it happens only once

[21:32:59] <aesteve> latency = Long.valueOf(ctx.request().getParam("latency"));

[21:33:04] <aesteve> oops

[21:33:12] <aesteve>

[21:33:45] <temporalfox> it should not be like that I think

[21:33:52] <temporalfox> it should happen only once per connection

[21:33:55] <aesteve> yes I guess so

[21:34:09] <temporalfox> but it is hard to know which request is for which connection

[21:34:23] <aesteve> but still, even with 0ms latency, there's absolutely no difference

[21:34:43] <temporalfox> I think latency simulation should happen at a lower level

[21:34:47] <temporalfox> socket

[21:34:48] <aesteve> so I guess I must have missed something obvious but…
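The per-request delay being discussed can be sketched as below. This is a hypothetical Vert.x Web handler, assuming Vert.x 3.x; it reads the "latency" query parameter (as in the snippet pasted above) and delays the response with a timer. As temporalfox points out, a handler-level delay like this applies per request, not per connection, which is the limitation under discussion.

```java
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

public class LatencyRoute {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);
    // First handler: optionally delay before passing to the next one
    router.get("/assets/*").handler(ctx -> {
      long latency = Long.valueOf(ctx.request().getParam("latency"));
      if (latency > 0) {
        // setTimer fires once after `latency` ms on the event loop
        vertx.setTimer(latency, id -> ctx.next());
      } else {
        ctx.next();
      }
    });
    // Second handler: actually serve the asset
    router.get("/assets/*").handler(ctx ->
        ctx.response().sendFile("webroot" + ctx.request().path()));
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```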

[21:35:39] <temporalfox> what is your os ?

[21:36:04] <aesteve> mac os

[21:36:16] <temporalfox> ipfw

[21:36:20] <temporalfox>

[21:36:55] <aesteve> but even with no latency, shouldn't I see something different ?

[21:37:19] <aesteve> because for instance with gophertiles i somehow see the difference with 0ms latency

[21:37:27] <aesteve> tiles are not loaded in the same order

[21:37:56] <temporalfox> latency has to do with TCP handshake and TCP slow start

[21:38:15] <temporalfox> + the best you can get on HTTP/1.1 is pipelining

[21:38:33] <temporalfox> so you need to wait until the first request has been fully sent before sending the second request

[21:38:56] <temporalfox> and you need to send the responses in the same order as the requests

[21:39:17] <aesteve> vs.

[21:39:34] <temporalfox> second is much faster

[21:39:39] <aesteve> yes

[21:39:43] <temporalfox> but in their case I think they apply latency to the connection

[21:39:51] <temporalfox> need to look at source code

[21:39:52] <aesteve> latency=0

[21:40:05] <aesteve> that's what I'm trying to explain

[21:40:21] <aesteve> even with latency=0 there's a huge difference

[21:40:32] <aesteve> but for me, that's the exact same behaviour

[21:41:02] <temporalfox> what is the behavior you see ?

[21:41:32] <temporalfox> with latency == 0 you should not execute blocking

[21:41:32] <aesteve> the http2 one : tiles loaded in the order they're declared

[21:42:09] <temporalfox> I am going to give a try

[21:42:46] <aesteve> I'll push the no-latency version (no execute blocking) wait a sec

[21:43:06] <temporalfox> ok

[21:43:11] <aesteve> done

[21:43:37] <aesteve> ./gradlew run --refresh-dependencies

[21:43:51] <aesteve> (to fetch the latest version from .m2)

[21:46:18] <temporalfox> ok

[21:46:26] <temporalfox> which deps ?

[21:46:58] <temporalfox> resource not found :-)

[21:47:01] <aesteve> vertx-core with your latest improvements

[21:47:16] <aesteve> https://localhost:4043/image.hbs vs. https://localhost:4044/image.hbs

[21:47:20] <temporalfox> ok

[21:47:46] <temporalfox> 4044 takes 300ms

[21:47:56] <temporalfox> 4043 takes 2126ms

[21:48:00] <aesteve> wow

[21:48:08] <aesteve> there's something wrong with my browser :D

[21:48:34] <temporalfox> if I refresh the second

[21:48:42] <temporalfox> now it takes 313 ms

[21:48:48] <temporalfox> I think because of caching

[21:48:55] <aesteve> it should not cache

[21:49:02] <aesteve> there's a cache buster query param

[21:49:10] <temporalfox> ok

[21:49:12] <temporalfox> but it's localhost

[21:49:15] <temporalfox> so hard to say

[21:49:27] <temporalfox> I'm going to look at the connection timeline

[21:49:36] <aesteve> <img src="/assets/…..jpeg?cachebuster=$timestamp" />

[21:50:08] <aesteve> if you see "hit from cache" within Chrome devtools there's something wrong

[21:50:37] <aesteve> (also, you can check the checkbox "disable caching" in the network tab if you want to be sure)

[21:51:26] <temporalfox> ok

[21:51:46] <temporalfox> both are more or less the same now

[21:51:57] <aesteve> +/- 3s ?

[21:52:19] <temporalfox> yes

[21:52:23] <temporalfox> I mean now it's same time

[21:52:33] <temporalfox> actually

[21:52:39] <temporalfox> the time is the same

[21:52:46] <temporalfox> however http2 uses a single connection

[21:52:51] <temporalfox> and http1 uses 5 connections

[21:54:01] <aesteve> that wifi…

[21:54:11] <temporalfox> now we could simulate some delay in connection

[21:54:18] <temporalfox> but it would need to be at the connection level

[21:54:22] <temporalfox> not the request level

[21:54:38] <aesteve> yeah but don't you think there's something wrong ?

[21:54:55] <aesteve> what bothers me is the major difference with the 0-latency golfing example

[21:55:08] <aesteve> especially the order the tiles are loaded in

[21:55:30] <temporalfox> I think it is normal with low latency localhost

[21:55:34] <temporalfox> for the speed

[21:55:43] <temporalfox> the order I don't know how browser do that :-)

[21:55:50] <aesteve> Idk either

[21:55:52] <temporalfox> there are tools in chrome to simulate latency

[21:55:56] <aesteve> idd

[21:56:40] <temporalfox> press F12

[21:57:02] <temporalfox> but I don't have F12 key :-)

[21:57:11] <temporalfox> but chrome canary

[21:57:25] <aesteve> there's a small difference

[21:57:32] <aesteve> 6.3s vs 7s

[21:57:47] <aesteve> with what they call "good 2G"

[21:58:39] <temporalfox> how do you run that ?

[21:58:41] <temporalfox> I'm not able

[21:58:45] <temporalfox> chrome 48

[21:58:57] <aesteve> network tab within chrome dev tools

[21:59:09] <aesteve> throttling combo-box → Good 2G

[21:59:19] <temporalfox> ah I see it

[22:00:17] <temporalfox> yes like you

[22:00:20] <temporalfox> but that's normal

[22:00:22] <temporalfox> imho

[22:00:32] <temporalfox> the main diff is 1 connection versus 5 connections

[22:00:49] <aesteve> one difference I see in the source code is that they're using <p> where I'm using a table

[22:00:52] <aesteve> but…

[22:00:56] <temporalfox> also there is a default limit with http settings

[22:02:38] <aesteve> 58s vs 70s with the worst settings (gars)

[22:02:47] <aesteve> gprs

[22:03:18] <temporalfox> as I said the main difference is about 5 versus 1 connection

[22:03:26] <temporalfox> it could be possible that there is a limit with http2

[22:03:29] <temporalfox> http2 has settings

[22:03:39] <temporalfox> that limit the number of simultaneous requests

[22:03:52] <temporalfox> maxConcurrentStreams

[22:04:04] <temporalfox> we can try a different maxConcurrentStreams setting

[22:04:06] <temporalfox> higher

[22:04:13] <temporalfox> it's in httpserveroptions

[22:04:56] <temporalfox> options.setHttp2Setting(new Http2Settings().setMaxConcurrentStreams(1000))

[22:05:03] <temporalfox> try this to see what it gives

[22:05:09] <temporalfox> (I don't know the default value)
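The setting being suggested can be sketched as follows. Note the setter name pasted in the chat (`setHttp2Setting`) reflects the pre-release API being tested here; in released Vert.x 3 the equivalent is, to the best of my knowledge, `HttpServerOptions.setInitialSettings`. The port and window size below are illustrative.

```java
import io.vertx.core.Vertx;
import io.vertx.core.http.Http2Settings;
import io.vertx.core.http.HttpServerOptions;

public class Http2Tuning {
  public static void main(String[] args) {
    HttpServerOptions options = new HttpServerOptions()
        .setSsl(true)
        .setUseAlpn(true) // ALPN is required for HTTP/2 over TLS
        // Raise the stream concurrency limit discussed above, and the
        // flow-control window mentioned later (default is 64KB - 1)
        .setInitialSettings(new Http2Settings()
            .setMaxConcurrentStreams(1000)
            .setInitialWindowSize(1024 * 1024));
    Vertx.vertx().createHttpServer(options)
        .requestHandler(req -> req.response().end("ok"))
        .listen(4043);
  }
}
```

Since all streams of an HTTP/2 client share one connection, both settings bound how many requests can effectively be in flight at once, which is why they matter for the one-connection-versus-five comparison in this conversation.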

[22:09:00] <jtruelove_> temporalfox: quick question (or maybe not) - reading through cassandra driver docs I see in 3.0 that it is suggested that you create only one cassandra session per keyspace (most apps use just one keyspace). How vertx-cassandra currently works is you associate a cassandra session to a vertx instance (so say each verticle instance has a cassandra session). If

[22:09:01] <jtruelove_> you moved away from this where one cassandra session was shared by all event loop instances what would be the right way to handle this? Should the vertx cassandra wrapper keep track of which event loop requested the db call and runOnContext on that specific event loop or will the right thing get done using the same event loop to always marshall responses

[22:09:01] <jtruelove_> back onto (ie use the vertx instance that you initialized the session with originally)?

[22:09:14] <temporalfox> aesteve for 4G it makes a difference

[22:09:33] <temporalfox> 750 ms versus 1000ms

[22:10:48] <aesteve> that's kinda crazy

[22:10:56] <aesteve> I got 3.7s vs 4s

[22:11:13] <temporalfox> :-)

[22:11:24] <aesteve> did you do anything special with the JVM settings ?

[22:11:29] <temporalfox> no

[22:11:45] <aesteve> I don't get it >_<

[22:11:45] <temporalfox> there is a good reason chrome uses 5 connections in http1/1

[22:12:05] <temporalfox> it's not surprising that's what I'm saying

[22:12:20] <temporalfox> the difference is actually on the server not much on the client

[22:15:15] <aesteve> yeah but the code uses a single event loop, no ?

[22:15:49] <temporalfox> aesteve one diff with the gopher example is that your image are quite big

[22:16:02] <temporalfox> yes but it means 5 connections

[22:16:17] <temporalfox> they are twice bigger than gopher

[22:16:26] <temporalfox> you are avg 1.2k

[22:16:35] <temporalfox> gopher is around 700b

[22:16:41] <temporalfox> I think it makes a diff

[22:17:19] <aesteve> maybe :)

[22:17:51] <temporalfox> look at the connection timeline

[22:17:56] <temporalfox> with gopher in http 1

[22:19:58] <aesteve> mmh yes I see 5 simultaneous connections

[22:21:14] <aesteve> and there's just one in my HTTP1.1 example

[22:21:21] <aesteve> 5 at first, then just one

[22:24:30] <aesteve> that wifi is driving me nuts, sorry temporalfox

[22:25:07] <temporalfox> ok :)

[22:28:06] <aesteve> but yes looking at the connections timeline there's a major difference

[22:28:57] <jtruelove_> temporalfox you see the above question?

[22:29:01] <temporalfox> I am going to try a higher default initial window size

[22:29:08] <temporalfox> jtruelove_ yes sorry

[22:29:18] <temporalfox> I'm clueless about cassandra

[22:29:26] <temporalfox> where is the doc ?

[22:29:42] <aesteve>

[22:30:03] <jtruelove_> it's not really about cassandra as much as it is about getting responses back from a DB onto the vertx context that initiated them

[22:30:03] <temporalfox> so you mean that vertx-cassandra could share the same session for all clients in a vertx app ?

[22:30:12] <temporalfox> ah ok

[22:30:14] <jtruelove_> yeah

[22:30:16] <temporalfox> yes it should do that

[22:30:19] <temporalfox> too

[22:30:39] <temporalfox> what is this link aesteve ?

[22:30:52] <jtruelove_> so in order to do that then for every db call you need to capture the context of which vertx instance is making the call

[22:31:09] <aesteve> with https and my example

[22:31:15] <jtruelove_> is that how the shared connections work with the jdbc stuff?

[22:31:39] <aesteve> and here : the gophertiles

[22:31:52] <temporalfox> DEFAULT_WINDOW_SIZE is 65K

[22:32:05] <temporalfox> jtruelove_ yes

[22:32:12] <temporalfox> but most of the time it will be the same context

[22:32:27] <temporalfox> jtruelove_ for jdbc

[22:32:36] <temporalfox> I think we capture the context per connection

[22:32:48] <temporalfox> aesteve I'm going to try higher DEFAULT_WINDOW_SIZE

[22:32:56] <aesteve> maybe

[22:33:05] <jtruelove_> how the vertx wrapper works now is it captures the vertx instance in the session, so each instance creates a session

[22:33:06] <aesteve> but do you see the same behavior as mine ?

[22:33:07] <temporalfox> I think it *could* make a diff

[22:33:24] <jtruelove_> and you are always marshalling back onto the same context

[22:34:02] <jtruelove_> if you started a verticle with 8 instances and you only had one session which vertx instance (eventloop) would you associate to the cassandra session etc..

[22:34:45] <temporalfox> aesteve I know something that makes a difference

[22:34:51] <temporalfox> in http1.1

[22:34:56] <temporalfox> it will reuse the same connections

[22:35:03] <temporalfox> as in the previous display

[22:35:22] <temporalfox> if I restart the server

[22:35:56] <temporalfox> http 1.1 takes 1432

[22:36:03] <temporalfox> if I refresh 1025

[22:36:16] <temporalfox> and http 2 takes 770ms the first time

[22:36:26] <temporalfox> and almost the same the second

[22:36:50] <temporalfox> jtruelove_ now I'm all listening to you :-)

[22:36:58] <temporalfox>

[22:36:59] <temporalfox> ?

[22:37:07] <aesteve> that's an explanation temporalfox idd

[22:38:09] <jtruelove_> lol alright :)

[22:38:13] <jtruelove_> yeah that is the one

[22:38:24] <jtruelove_> so say i start my server with 8 instances

[22:38:56] <jtruelove_> will running Context ctx = vertx.getOrCreateContext(); with a ref to any one of those instances of VertxImpl give me the right context?

[22:39:14] <jtruelove_> to pass the results back on, looking at the jdbc code that seems to be the implication

[22:39:31] <temporalfox> where is this code ?

[22:39:33] <temporalfox> what class ?

[22:39:42] <temporalfox> ?

[22:39:45] <jtruelove_>

[22:39:50] <jtruelove_> that's the jdbc code

[22:40:02] <jtruelove_> i'll show you the cassandra code shortly let me find it one sec-

[22:40:12] <temporalfox> ok

[22:41:26] <jtruelove_> this class is used to return results

[22:41:41] <temporalfox> that seems similar

[22:42:21] <jtruelove_> so given only one CassandraSession and one reference to vertx will that do the right thing

[22:42:37] <temporalfox> yes it will callback on the caller context

[22:42:52] <jtruelove_> so vertx.getOrCreateContext() grabs the right event loop

[22:42:53] <temporalfox> getOrCreateContext() inspects the current thread

[22:42:58] <temporalfox> yes

[22:43:03] <temporalfox> and if it's a Vert.x thread

[22:43:09] <jtruelove_> perfect just wanted to make sure

[22:43:11] <temporalfox> it gets the context associated with this thread

[22:43:21] <temporalfox> otherwise it creates a new one
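The pattern described here, capturing the caller's context at call time and marshalling the driver's callback back onto it, can be sketched as below. This is a minimal illustration, not the actual vertx-cassandra code; the `query` method and the background thread standing in for the driver are hypothetical.

```java
import io.vertx.core.Context;
import io.vertx.core.Handler;
import io.vertx.core.Vertx;

public class ContextCapture {
  private final Vertx vertx;

  public ContextCapture(Vertx vertx) {
    this.vertx = vertx;
  }

  public void query(String cql, Handler<String> handler) {
    // Called from a Vert.x thread, this returns that thread's context;
    // called from elsewhere, it creates a new one (as explained above)
    Context callerContext = vertx.getOrCreateContext();
    // Stand-in for the driver completing on one of its own threads
    new Thread(() -> {
      String result = "rows for: " + cql;
      // Marshal the result back onto the caller's event loop, so the
      // handler runs on the context that issued the query
      callerContext.runOnContext(v -> handler.handle(result));
    }).start();
  }
}
```

This is what lets a single shared session serve many verticle instances: each call is routed back to whichever event loop made it, rather than to the loop the session was created on.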

[22:43:39] <jtruelove_> because we have been using vertx-cassandra with a session per verticle instance approach which i think is wrong

[22:43:54] <jtruelove_> right but in this scenario it shouldn't be a new one

[22:44:22] <temporalfox> I hope! it's good Vert.x practice if you want good performance

[22:44:42] <jtruelove_> lol, why not a new event loop for each request bro :)

[22:44:48] <jtruelove_> it's like web 3.0

[22:45:18] <jtruelove_> yeah i think the java driver changed something in the latest version because this is the first i've seen of the whole only use one session business

[22:52:25] <aesteve> thanks for your help temporalfox , hope I can come up with a more demonstrative example soon. Good night all !